00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 836 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3496 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.014 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.015 The recommended git tool is: git 00:00:00.015 using credential 00000000-0000-0000-0000-000000000002 00:00:00.017 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.028 Fetching changes from the remote Git repository 00:00:00.030 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.045 Using shallow fetch with depth 1 00:00:00.045 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.045 > git --version # timeout=10 00:00:00.071 > git --version # 'git version 2.39.2' 00:00:00.071 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.117 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.117 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.270 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.282 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.295 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:02.295 > git config core.sparsecheckout # timeout=10 00:00:02.307 > git read-tree -mu HEAD # timeout=10 00:00:02.322 > git checkout -f 
53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:02.339 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:02.339 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:02.519 [Pipeline] Start of Pipeline 00:00:02.530 [Pipeline] library 00:00:02.531 Loading library shm_lib@master 00:00:02.531 Library shm_lib@master is cached. Copying from home. 00:00:02.545 [Pipeline] node 00:00:02.556 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.558 [Pipeline] { 00:00:02.566 [Pipeline] catchError 00:00:02.568 [Pipeline] { 00:00:02.580 [Pipeline] wrap 00:00:02.589 [Pipeline] { 00:00:02.596 [Pipeline] stage 00:00:02.598 [Pipeline] { (Prologue) 00:00:02.615 [Pipeline] echo 00:00:02.616 Node: VM-host-WFP7 00:00:02.622 [Pipeline] cleanWs 00:00:02.632 [WS-CLEANUP] Deleting project workspace... 00:00:02.632 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.638 [WS-CLEANUP] done 00:00:02.829 [Pipeline] setCustomBuildProperty 00:00:02.930 [Pipeline] httpRequest 00:00:03.313 [Pipeline] echo 00:00:03.315 Sorcerer 10.211.164.101 is alive 00:00:03.324 [Pipeline] retry 00:00:03.326 [Pipeline] { 00:00:03.336 [Pipeline] httpRequest 00:00:03.341 HttpMethod: GET 00:00:03.342 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:03.342 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:03.343 Response Code: HTTP/1.1 200 OK 00:00:03.343 Success: Status code 200 is in the accepted range: 200,404 00:00:03.344 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:03.489 [Pipeline] } 00:00:03.504 [Pipeline] // retry 00:00:03.511 [Pipeline] sh 00:00:03.795 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:03.811 [Pipeline] httpRequest 00:00:04.191 [Pipeline] echo 00:00:04.192 Sorcerer 10.211.164.101 is alive 
00:00:04.200 [Pipeline] retry 00:00:04.202 [Pipeline] { 00:00:04.214 [Pipeline] httpRequest 00:00:04.219 HttpMethod: GET 00:00:04.220 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:04.220 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:04.221 Response Code: HTTP/1.1 200 OK 00:00:04.221 Success: Status code 200 is in the accepted range: 200,404 00:00:04.222 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:18.865 [Pipeline] } 00:00:18.886 [Pipeline] // retry 00:00:18.895 [Pipeline] sh 00:00:19.183 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:21.736 [Pipeline] sh 00:00:22.023 + git -C spdk log --oneline -n5 00:00:22.023 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:22.023 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:22.023 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:22.023 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:22.023 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:00:22.047 [Pipeline] withCredentials 00:00:22.060 > git --version # timeout=10 00:00:22.073 > git --version # 'git version 2.39.2' 00:00:22.092 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:22.094 [Pipeline] { 00:00:22.103 [Pipeline] retry 00:00:22.104 [Pipeline] { 00:00:22.122 [Pipeline] sh 00:00:22.408 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:22.990 [Pipeline] } 00:00:23.010 [Pipeline] // retry 00:00:23.015 [Pipeline] } 00:00:23.034 [Pipeline] // withCredentials 00:00:23.043 [Pipeline] httpRequest 00:00:23.481 [Pipeline] echo 00:00:23.482 Sorcerer 10.211.164.101 is alive 00:00:23.492 [Pipeline] retry 00:00:23.493 [Pipeline] { 00:00:23.509 [Pipeline] httpRequest 00:00:23.514 HttpMethod: GET 00:00:23.515 URL: 
http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:23.515 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:23.532 Response Code: HTTP/1.1 200 OK 00:00:23.532 Success: Status code 200 is in the accepted range: 200,404 00:00:23.533 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:15.006 [Pipeline] } 00:01:15.023 [Pipeline] // retry 00:01:15.031 [Pipeline] sh 00:01:15.318 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:16.714 [Pipeline] sh 00:01:17.000 + git -C dpdk log --oneline -n5 00:01:17.000 eeb0605f11 version: 23.11.0 00:01:17.000 238778122a doc: update release notes for 23.11 00:01:17.000 46aa6b3cfc doc: fix description of RSS features 00:01:17.000 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:17.000 7e421ae345 devtools: support skipping forbid rule check 00:01:17.019 [Pipeline] writeFile 00:01:17.033 [Pipeline] sh 00:01:17.320 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:17.335 [Pipeline] sh 00:01:17.624 + cat autorun-spdk.conf 00:01:17.624 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.624 SPDK_RUN_ASAN=1 00:01:17.624 SPDK_RUN_UBSAN=1 00:01:17.624 SPDK_TEST_RAID=1 00:01:17.624 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:17.624 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:17.624 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.632 RUN_NIGHTLY=1 00:01:17.634 [Pipeline] } 00:01:17.647 [Pipeline] // stage 00:01:17.663 [Pipeline] stage 00:01:17.665 [Pipeline] { (Run VM) 00:01:17.677 [Pipeline] sh 00:01:17.964 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:17.964 + echo 'Start stage prepare_nvme.sh' 00:01:17.964 Start stage prepare_nvme.sh 00:01:17.964 + [[ -n 5 ]] 00:01:17.964 + disk_prefix=ex5 00:01:17.964 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:17.964 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:17.964 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:17.964 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.964 ++ SPDK_RUN_ASAN=1 00:01:17.964 ++ SPDK_RUN_UBSAN=1 00:01:17.964 ++ SPDK_TEST_RAID=1 00:01:17.964 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:17.964 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:17.964 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.964 ++ RUN_NIGHTLY=1 00:01:17.964 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:17.964 + nvme_files=() 00:01:17.964 + declare -A nvme_files 00:01:17.964 + backend_dir=/var/lib/libvirt/images/backends 00:01:17.964 + nvme_files['nvme.img']=5G 00:01:17.964 + nvme_files['nvme-cmb.img']=5G 00:01:17.964 + nvme_files['nvme-multi0.img']=4G 00:01:17.964 + nvme_files['nvme-multi1.img']=4G 00:01:17.964 + nvme_files['nvme-multi2.img']=4G 00:01:17.964 + nvme_files['nvme-openstack.img']=8G 00:01:17.964 + nvme_files['nvme-zns.img']=5G 00:01:17.964 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:17.964 + (( SPDK_TEST_FTL == 1 )) 00:01:17.964 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:17.964 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:17.964 + for nvme in "${!nvme_files[@]}" 00:01:17.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:17.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.964 + for nvme in "${!nvme_files[@]}" 00:01:17.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:17.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.964 + for nvme in "${!nvme_files[@]}" 00:01:17.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:17.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:17.964 + for nvme in "${!nvme_files[@]}" 00:01:17.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:17.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.964 + for nvme in "${!nvme_files[@]}" 00:01:17.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:17.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.964 + for nvme in "${!nvme_files[@]}" 00:01:17.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:17.964 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.964 + for nvme in "${!nvme_files[@]}" 00:01:17.964 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:18.225 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:18.225 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:18.225 + echo 'End stage prepare_nvme.sh' 00:01:18.225 End stage prepare_nvme.sh 00:01:18.239 [Pipeline] sh 00:01:18.528 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:18.529 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:18.529 00:01:18.529 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:18.529 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:18.529 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:18.529 HELP=0 00:01:18.529 DRY_RUN=0 00:01:18.529 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:18.529 NVME_DISKS_TYPE=nvme,nvme, 00:01:18.529 NVME_AUTO_CREATE=0 00:01:18.529 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:18.529 NVME_CMB=,, 00:01:18.529 NVME_PMR=,, 00:01:18.529 NVME_ZNS=,, 00:01:18.529 NVME_MS=,, 00:01:18.529 NVME_FDP=,, 00:01:18.529 SPDK_VAGRANT_DISTRO=fedora39 00:01:18.529 SPDK_VAGRANT_VMCPU=10 00:01:18.529 SPDK_VAGRANT_VMRAM=12288 00:01:18.529 SPDK_VAGRANT_PROVIDER=libvirt 00:01:18.529 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:18.529 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:18.529 SPDK_OPENSTACK_NETWORK=0 00:01:18.529 VAGRANT_PACKAGE_BOX=0 00:01:18.529 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:18.529 
FORCE_DISTRO=true 00:01:18.529 VAGRANT_BOX_VERSION= 00:01:18.529 EXTRA_VAGRANTFILES= 00:01:18.529 NIC_MODEL=virtio 00:01:18.529 00:01:18.529 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:18.529 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:20.439 Bringing machine 'default' up with 'libvirt' provider... 00:01:20.699 ==> default: Creating image (snapshot of base box volume). 00:01:20.960 ==> default: Creating domain with the following settings... 00:01:20.960 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727738340_07d5a075dba746c88421 00:01:20.960 ==> default: -- Domain type: kvm 00:01:20.960 ==> default: -- Cpus: 10 00:01:20.960 ==> default: -- Feature: acpi 00:01:20.960 ==> default: -- Feature: apic 00:01:20.960 ==> default: -- Feature: pae 00:01:20.960 ==> default: -- Memory: 12288M 00:01:20.960 ==> default: -- Memory Backing: hugepages: 00:01:20.960 ==> default: -- Management MAC: 00:01:20.960 ==> default: -- Loader: 00:01:20.960 ==> default: -- Nvram: 00:01:20.960 ==> default: -- Base box: spdk/fedora39 00:01:20.960 ==> default: -- Storage pool: default 00:01:20.960 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727738340_07d5a075dba746c88421.img (20G) 00:01:20.960 ==> default: -- Volume Cache: default 00:01:20.960 ==> default: -- Kernel: 00:01:20.960 ==> default: -- Initrd: 00:01:20.960 ==> default: -- Graphics Type: vnc 00:01:20.960 ==> default: -- Graphics Port: -1 00:01:20.960 ==> default: -- Graphics IP: 127.0.0.1 00:01:20.960 ==> default: -- Graphics Password: Not defined 00:01:20.960 ==> default: -- Video Type: cirrus 00:01:20.960 ==> default: -- Video VRAM: 9216 00:01:20.960 ==> default: -- Sound Type: 00:01:20.960 ==> default: -- Keymap: en-us 00:01:20.960 ==> default: -- TPM Path: 00:01:20.960 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:20.960 ==> default: -- Command line args: 00:01:20.960 
==> default: -> value=-device, 00:01:20.960 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:20.960 ==> default: -> value=-drive, 00:01:20.960 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:20.960 ==> default: -> value=-device, 00:01:20.960 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.960 ==> default: -> value=-device, 00:01:20.960 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:20.960 ==> default: -> value=-drive, 00:01:20.960 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:20.960 ==> default: -> value=-device, 00:01:20.960 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.960 ==> default: -> value=-drive, 00:01:20.960 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:20.960 ==> default: -> value=-device, 00:01:20.960 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.960 ==> default: -> value=-drive, 00:01:20.960 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:20.960 ==> default: -> value=-device, 00:01:20.960 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.960 ==> default: Creating shared folders metadata... 00:01:20.960 ==> default: Starting domain. 00:01:22.873 ==> default: Waiting for domain to get an IP address... 00:01:40.982 ==> default: Waiting for SSH to become available... 00:01:40.982 ==> default: Configuring and enabling network interfaces... 
00:01:46.268 default: SSH address: 192.168.121.91:22 00:01:46.268 default: SSH username: vagrant 00:01:46.268 default: SSH auth method: private key 00:01:48.820 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:55.435 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:02.014 ==> default: Mounting SSHFS shared folder... 00:02:04.555 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:04.555 ==> default: Checking Mount.. 00:02:05.937 ==> default: Folder Successfully Mounted! 00:02:05.937 ==> default: Running provisioner: file... 00:02:07.340 default: ~/.gitconfig => .gitconfig 00:02:07.600 00:02:07.600 SUCCESS! 00:02:07.600 00:02:07.600 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:07.600 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:07.600 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:07.600 00:02:07.609 [Pipeline] } 00:02:07.625 [Pipeline] // stage 00:02:07.636 [Pipeline] dir 00:02:07.636 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:07.638 [Pipeline] { 00:02:07.652 [Pipeline] catchError 00:02:07.653 [Pipeline] { 00:02:07.666 [Pipeline] sh 00:02:07.950 + vagrant ssh-config --host vagrant 00:02:07.950 + sed -ne /^Host/,$p 00:02:07.950 + tee ssh_conf 00:02:10.492 Host vagrant 00:02:10.492 HostName 192.168.121.91 00:02:10.492 User vagrant 00:02:10.492 Port 22 00:02:10.492 UserKnownHostsFile /dev/null 00:02:10.492 StrictHostKeyChecking no 00:02:10.492 PasswordAuthentication no 00:02:10.492 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:10.492 IdentitiesOnly yes 00:02:10.492 LogLevel FATAL 00:02:10.492 ForwardAgent yes 00:02:10.492 ForwardX11 yes 00:02:10.492 00:02:10.507 [Pipeline] withEnv 00:02:10.509 [Pipeline] { 00:02:10.522 [Pipeline] sh 00:02:10.805 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:10.805 source /etc/os-release 00:02:10.805 [[ -e /image.version ]] && img=$(< /image.version) 00:02:10.805 # Minimal, systemd-like check. 00:02:10.805 if [[ -e /.dockerenv ]]; then 00:02:10.805 # Clear garbage from the node's name: 00:02:10.805 # agt-er_autotest_547-896 -> autotest_547-896 00:02:10.805 # $HOSTNAME is the actual container id 00:02:10.805 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:10.805 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:10.805 # We can assume this is a mount from a host where container is running, 00:02:10.805 # so fetch its hostname to easily identify the target swarm worker. 
00:02:10.805 container="$(< /etc/hostname) ($agent)" 00:02:10.805 else 00:02:10.805 # Fallback 00:02:10.805 container=$agent 00:02:10.805 fi 00:02:10.805 fi 00:02:10.805 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:10.805 00:02:11.077 [Pipeline] } 00:02:11.094 [Pipeline] // withEnv 00:02:11.103 [Pipeline] setCustomBuildProperty 00:02:11.116 [Pipeline] stage 00:02:11.118 [Pipeline] { (Tests) 00:02:11.133 [Pipeline] sh 00:02:11.416 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:11.689 [Pipeline] sh 00:02:11.971 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:12.247 [Pipeline] timeout 00:02:12.247 Timeout set to expire in 1 hr 30 min 00:02:12.249 [Pipeline] { 00:02:12.263 [Pipeline] sh 00:02:12.544 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:13.113 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:02:13.125 [Pipeline] sh 00:02:13.406 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:13.679 [Pipeline] sh 00:02:13.962 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:14.240 [Pipeline] sh 00:02:14.524 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:14.784 ++ readlink -f spdk_repo 00:02:14.784 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:14.784 + [[ -n /home/vagrant/spdk_repo ]] 00:02:14.784 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:14.784 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:14.784 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:14.784 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:14.784 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:14.784 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:14.784 + cd /home/vagrant/spdk_repo 00:02:14.784 + source /etc/os-release 00:02:14.784 ++ NAME='Fedora Linux' 00:02:14.784 ++ VERSION='39 (Cloud Edition)' 00:02:14.784 ++ ID=fedora 00:02:14.784 ++ VERSION_ID=39 00:02:14.784 ++ VERSION_CODENAME= 00:02:14.784 ++ PLATFORM_ID=platform:f39 00:02:14.784 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:14.784 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:14.784 ++ LOGO=fedora-logo-icon 00:02:14.784 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:14.784 ++ HOME_URL=https://fedoraproject.org/ 00:02:14.784 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:14.784 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:14.784 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:14.784 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:14.784 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:14.784 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:14.784 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:14.784 ++ SUPPORT_END=2024-11-12 00:02:14.784 ++ VARIANT='Cloud Edition' 00:02:14.784 ++ VARIANT_ID=cloud 00:02:14.784 + uname -a 00:02:14.784 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:14.784 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:15.354 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:15.354 Hugepages 00:02:15.354 node hugesize free / total 00:02:15.354 node0 1048576kB 0 / 0 00:02:15.354 node0 2048kB 0 / 0 00:02:15.354 00:02:15.354 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:15.354 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:15.354 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:15.354 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:15.354 + rm -f /tmp/spdk-ld-path 00:02:15.354 + source autorun-spdk.conf 00:02:15.354 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.354 ++ SPDK_RUN_ASAN=1 00:02:15.354 ++ SPDK_RUN_UBSAN=1 00:02:15.354 ++ SPDK_TEST_RAID=1 00:02:15.354 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:15.354 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:15.354 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.354 ++ RUN_NIGHTLY=1 00:02:15.354 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:15.354 + [[ -n '' ]] 00:02:15.354 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:15.354 + for M in /var/spdk/build-*-manifest.txt 00:02:15.354 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:15.354 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:15.354 + for M in /var/spdk/build-*-manifest.txt 00:02:15.354 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:15.354 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:15.354 + for M in /var/spdk/build-*-manifest.txt 00:02:15.354 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:15.354 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:15.354 ++ uname 00:02:15.354 + [[ Linux == \L\i\n\u\x ]] 00:02:15.354 + sudo dmesg -T 00:02:15.614 + sudo dmesg --clear 00:02:15.614 + dmesg_pid=6161 00:02:15.614 + sudo dmesg -Tw 00:02:15.614 + [[ Fedora Linux == FreeBSD ]] 00:02:15.614 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.614 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.614 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:15.614 + [[ -x /usr/src/fio-static/fio ]] 00:02:15.614 + export FIO_BIN=/usr/src/fio-static/fio 00:02:15.614 + FIO_BIN=/usr/src/fio-static/fio 00:02:15.614 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:15.614 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:15.614 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:15.614 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:15.614 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:15.614 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:15.614 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:15.614 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:15.614 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:15.614 Test configuration: 00:02:15.614 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.614 SPDK_RUN_ASAN=1 00:02:15.614 SPDK_RUN_UBSAN=1 00:02:15.614 SPDK_TEST_RAID=1 00:02:15.614 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:15.614 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:15.614 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.614 RUN_NIGHTLY=1 23:19:55 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:15.614 23:19:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:15.614 23:19:55 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:15.614 23:19:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:15.614 23:19:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.614 23:19:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.614 23:19:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.614 23:19:55 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.614 23:19:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.614 23:19:55 -- paths/export.sh@5 -- $ export PATH 00:02:15.615 23:19:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.615 23:19:55 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:15.615 23:19:55 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:15.615 23:19:55 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727738395.XXXXXX 00:02:15.615 23:19:55 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727738395.W3kUGh 00:02:15.615 23:19:55 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:15.615 23:19:55 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:15.615 23:19:55 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:15.615 23:19:55 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:15.615 23:19:55 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:15.615 23:19:55 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:15.615 23:19:55 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:15.615 23:19:55 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:15.615 23:19:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.615 23:19:55 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:15.615 23:19:55 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:15.615 23:19:55 -- pm/common@17 -- $ local monitor 00:02:15.615 23:19:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.615 23:19:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.615 23:19:55 -- pm/common@25 -- $ sleep 1 00:02:15.615 23:19:55 -- pm/common@21 -- $ date +%s 00:02:15.615 23:19:55 -- pm/common@21 -- $ date +%s 00:02:15.615 23:19:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727738395 00:02:15.615 23:19:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727738395 00:02:15.875 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727738395_collect-cpu-load.pm.log 00:02:15.875 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727738395_collect-vmstat.pm.log 00:02:16.845 23:19:56 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:16.845 23:19:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.845 23:19:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.845 23:19:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:16.845 23:19:56 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.845 Mon Sep 30 11:19:56 PM UTC 2024 00:02:16.845 23:19:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.845 v25.01-pre-17-g09cc66129 00:02:16.845 23:19:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:16.845 23:19:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:16.845 23:19:56 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:16.845 23:19:56 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:16.845 23:19:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.845 ************************************ 00:02:16.845 START TEST asan 00:02:16.845 ************************************ 00:02:16.845 using asan 00:02:16.845 23:19:56 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:16.845 00:02:16.845 real 0m0.001s 00:02:16.845 user 0m0.000s 00:02:16.845 sys 0m0.000s 00:02:16.845 23:19:56 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:16.845 23:19:56 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:16.845 ************************************ 00:02:16.845 END TEST asan 00:02:16.845 ************************************ 00:02:16.845 23:19:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:16.845 23:19:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:16.845 23:19:56 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:16.845 23:19:56 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:16.845 23:19:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.845 
************************************ 00:02:16.845 START TEST ubsan 00:02:16.845 ************************************ 00:02:16.845 using ubsan 00:02:16.845 23:19:56 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:16.845 00:02:16.845 real 0m0.000s 00:02:16.845 user 0m0.000s 00:02:16.845 sys 0m0.000s 00:02:16.845 23:19:56 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:16.845 23:19:56 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:16.845 ************************************ 00:02:16.845 END TEST ubsan 00:02:16.845 ************************************ 00:02:16.845 23:19:56 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:16.845 23:19:56 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:16.845 23:19:56 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:16.845 23:19:56 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:16.845 23:19:56 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:16.845 23:19:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.845 ************************************ 00:02:16.845 START TEST build_native_dpdk 00:02:16.845 ************************************ 00:02:16.845 23:19:56 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:16.845 eeb0605f11 version: 23.11.0 00:02:16.845 238778122a doc: update release notes for 23.11 00:02:16.845 46aa6b3cfc doc: fix description of RSS features 00:02:16.845 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:16.845 7e421ae345 devtools: support skipping forbid rule check 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:16.845 23:19:56 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:16.845 23:19:56 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:16.845 23:19:56 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:16.845 23:19:56 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.845 23:19:56 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.845 23:19:56 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.845 23:19:56 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.845 23:19:56 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.845 23:19:56 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:17.106 patching file config/rte_config.h 00:02:17.106 Hunk #1 succeeded at 60 (offset 1 line). 
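The `cmp_versions` trace above (and repeated below for the 24.07 checks) splits dotted versions on `IFS=.-:` and compares them field by field, with the first differing field deciding the result. A minimal standalone sketch of that strictly-less-than comparison, using a hypothetical `lt` helper rather than the upstream `scripts/common.sh` code:

```shell
# Hedged sketch of the field-by-field version comparison the log exercises;
# the function name and flow are illustrative, not the upstream implementation.
lt() {
    local IFS=.-: v=0
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Walk the longer of the two field lists; first difference decides.
    while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( v++ ))
    done
    return 1   # equal versions are not strictly less-than
}

lt 23.11.0 21.11.0 && echo yes || echo no   # 23 > 21 in the first field: "no"
lt 23.11.0 24.07.0 && echo yes || echo no   # 23 < 24 in the first field: "yes"
```

This matches the two outcomes in the trace: `lt 23.11.0 21.11.0` returns 1 (so the 21.11 compatibility patch path is skipped beyond `rte_config.h`), while `lt 23.11.0 24.07.0` returns 0 (so the `rte_pcapng.c` patch is applied).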
00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:17.106 patching file lib/pcapng/rte_pcapng.c 00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:17.106 23:19:56 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:17.106 23:19:56 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:17.106 23:19:56 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:23.684 The Meson build system 00:02:23.684 Version: 1.5.0 00:02:23.684 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:23.684 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:23.684 Build type: native build 00:02:23.684 Program cat found: YES (/usr/bin/cat) 00:02:23.684 Project name: DPDK 00:02:23.684 Project version: 23.11.0 00:02:23.684 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:23.684 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:23.684 Host machine cpu family: x86_64 00:02:23.684 Host machine cpu: x86_64 00:02:23.684 Message: ## Building in Developer Mode ## 00:02:23.684 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:23.684 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:23.684 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:23.684 Program python3 found: YES (/usr/bin/python3) 00:02:23.684 Program cat found: YES (/usr/bin/cat) 00:02:23.684 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
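Earlier in the trace, `-Werror` and `-Wno-stringop-overflow` are appended to `dpdk_cflags` only after checking the detected gcc major version (13 here, from `gcc -dumpversion`). A hedged sketch of that gating with a hypothetical `dpdk_cflags_for` helper, so it can be exercised without a compiler present:

```shell
# Illustrative paraphrase of the autobuild_common.sh flag gating seen in the
# log; thresholds (5 for -Werror, 10 for -Wno-stringop-overflow) come from the
# trace's [[ 13 -ge 5 ]] / [[ 13 -ge 10 ]] checks.
dpdk_cflags_for() {  # $1 = gcc major version
    local flags='-fPIC -g -fcommon'
    [ "$1" -ge 5 ] && flags+=' -Werror'
    [ "$1" -ge 10 ] && flags+=' -Wno-stringop-overflow'
    echo "$flags"
}

dpdk_cflags_for 13   # matches the c_args string passed to meson below
```

With gcc 13 both checks pass, which is why the meson invocation carries `-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow`.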
00:02:23.684 Compiler for C supports arguments -march=native: YES 00:02:23.684 Checking for size of "void *" : 8 00:02:23.684 Checking for size of "void *" : 8 (cached) 00:02:23.684 Library m found: YES 00:02:23.684 Library numa found: YES 00:02:23.684 Has header "numaif.h" : YES 00:02:23.684 Library fdt found: NO 00:02:23.684 Library execinfo found: NO 00:02:23.684 Has header "execinfo.h" : YES 00:02:23.684 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.684 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.684 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.684 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.684 Run-time dependency openssl found: YES 3.1.1 00:02:23.684 Run-time dependency libpcap found: YES 1.10.4 00:02:23.684 Has header "pcap.h" with dependency libpcap: YES 00:02:23.684 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.684 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.684 Compiler for C supports arguments -Wformat: YES 00:02:23.684 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.684 Compiler for C supports arguments -Wformat-security: NO 00:02:23.684 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.684 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.684 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.684 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.684 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.684 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.684 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.684 Compiler for C supports arguments -Wundef: YES 00:02:23.684 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.684 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.684 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.684 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:23.684 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:23.684 Program objdump found: YES (/usr/bin/objdump) 00:02:23.684 Compiler for C supports arguments -mavx512f: YES 00:02:23.684 Checking if "AVX512 checking" compiles: YES 00:02:23.684 Fetching value of define "__SSE4_2__" : 1 00:02:23.684 Fetching value of define "__AES__" : 1 00:02:23.684 Fetching value of define "__AVX__" : 1 00:02:23.684 Fetching value of define "__AVX2__" : 1 00:02:23.684 Fetching value of define "__AVX512BW__" : 1 00:02:23.684 Fetching value of define "__AVX512CD__" : 1 00:02:23.684 Fetching value of define "__AVX512DQ__" : 1 00:02:23.684 Fetching value of define "__AVX512F__" : 1 00:02:23.684 Fetching value of define "__AVX512VL__" : 1 00:02:23.684 Fetching value of define "__PCLMUL__" : 1 00:02:23.684 Fetching value of define "__RDRND__" : 1 00:02:23.684 Fetching value of define "__RDSEED__" : 1 00:02:23.684 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:23.685 Fetching value of define "__znver1__" : (undefined) 00:02:23.685 Fetching value of define "__znver2__" : (undefined) 00:02:23.685 Fetching value of define "__znver3__" : (undefined) 00:02:23.685 Fetching value of define "__znver4__" : (undefined) 00:02:23.685 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.685 Message: lib/log: Defining dependency "log" 00:02:23.685 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.685 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.685 Checking for function "getentropy" : NO 00:02:23.685 Message: lib/eal: Defining dependency "eal" 00:02:23.685 Message: lib/ring: Defining dependency "ring" 00:02:23.685 Message: lib/rcu: Defining dependency "rcu" 00:02:23.685 Message: lib/mempool: Defining dependency "mempool" 00:02:23.685 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.685 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.685 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.685 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:23.685 Compiler for C supports arguments -mpclmul: YES 00:02:23.685 Compiler for C supports arguments -maes: YES 00:02:23.685 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.685 Compiler for C supports arguments -mavx512bw: YES 00:02:23.685 Compiler for C supports arguments -mavx512dq: YES 00:02:23.685 Compiler for C supports arguments -mavx512vl: YES 00:02:23.685 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.685 Compiler for C supports arguments -mavx2: YES 00:02:23.685 Compiler for C supports arguments -mavx: YES 00:02:23.685 Message: lib/net: Defining dependency "net" 00:02:23.685 Message: lib/meter: Defining dependency "meter" 00:02:23.685 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.685 Message: lib/pci: Defining dependency "pci" 00:02:23.685 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.685 Message: lib/metrics: Defining dependency "metrics" 00:02:23.685 Message: lib/hash: Defining dependency "hash" 00:02:23.685 Message: lib/timer: Defining dependency "timer" 00:02:23.685 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.685 Message: lib/acl: Defining dependency "acl" 00:02:23.685 Message: lib/bbdev: Defining dependency "bbdev" 00:02:23.685 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:23.685 Run-time dependency libelf found: YES 0.191 00:02:23.685 Message: lib/bpf: Defining dependency "bpf" 00:02:23.685 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:23.685 Message: lib/compressdev: Defining dependency "compressdev" 00:02:23.685 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.685 Message: lib/distributor: Defining dependency "distributor" 00:02:23.685 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.685 Message: lib/efd: Defining dependency "efd" 00:02:23.685 Message: lib/eventdev: Defining dependency "eventdev" 00:02:23.685 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:23.685 Message: lib/gpudev: Defining dependency "gpudev" 00:02:23.685 Message: lib/gro: Defining dependency "gro" 00:02:23.685 Message: lib/gso: Defining dependency "gso" 00:02:23.685 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:23.685 Message: lib/jobstats: Defining dependency "jobstats" 00:02:23.685 Message: lib/latencystats: Defining dependency "latencystats" 00:02:23.685 Message: lib/lpm: Defining dependency "lpm" 00:02:23.685 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:23.685 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:23.685 Message: lib/member: Defining dependency "member" 00:02:23.685 Message: lib/pcapng: Defining dependency "pcapng" 00:02:23.685 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:23.685 Message: lib/power: Defining dependency "power" 00:02:23.685 Message: lib/rawdev: Defining dependency "rawdev" 00:02:23.685 Message: lib/regexdev: Defining dependency "regexdev" 00:02:23.685 Message: lib/mldev: Defining dependency "mldev" 00:02:23.685 Message: lib/rib: Defining dependency "rib" 00:02:23.685 Message: lib/reorder: Defining dependency "reorder" 00:02:23.685 Message: lib/sched: Defining dependency "sched" 00:02:23.685 Message: lib/security: Defining dependency "security" 00:02:23.685 Message: lib/stack: Defining dependency "stack" 00:02:23.685 Has header 
"linux/userfaultfd.h" : YES 00:02:23.685 Has header "linux/vduse.h" : YES 00:02:23.685 Message: lib/vhost: Defining dependency "vhost" 00:02:23.685 Message: lib/ipsec: Defining dependency "ipsec" 00:02:23.685 Message: lib/pdcp: Defining dependency "pdcp" 00:02:23.685 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.685 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.685 Message: lib/fib: Defining dependency "fib" 00:02:23.685 Message: lib/port: Defining dependency "port" 00:02:23.685 Message: lib/pdump: Defining dependency "pdump" 00:02:23.685 Message: lib/table: Defining dependency "table" 00:02:23.685 Message: lib/pipeline: Defining dependency "pipeline" 00:02:23.685 Message: lib/graph: Defining dependency "graph" 00:02:23.685 Message: lib/node: Defining dependency "node" 00:02:23.685 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.685 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.685 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:24.255 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:24.255 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:24.255 Compiler for C supports arguments -Wno-unused-value: YES 00:02:24.255 Compiler for C supports arguments -Wno-format: YES 00:02:24.255 Compiler for C supports arguments -Wno-format-security: YES 00:02:24.255 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:24.255 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:24.255 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:24.255 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:24.255 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:24.255 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:24.255 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:24.255 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:24.255 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:24.255 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:24.255 Has header "sys/epoll.h" : YES 00:02:24.255 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:24.255 Configuring doxy-api-html.conf using configuration 00:02:24.255 Configuring doxy-api-man.conf using configuration 00:02:24.255 Program mandb found: YES (/usr/bin/mandb) 00:02:24.255 Program sphinx-build found: NO 00:02:24.255 Configuring rte_build_config.h using configuration 00:02:24.255 Message: 00:02:24.255 ================= 00:02:24.255 Applications Enabled 00:02:24.255 ================= 00:02:24.255 00:02:24.255 apps: 00:02:24.255 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:24.255 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:24.255 test-pmd, test-regex, test-sad, test-security-perf, 00:02:24.255 00:02:24.255 Message: 00:02:24.255 ================= 00:02:24.255 Libraries Enabled 00:02:24.255 ================= 00:02:24.255 00:02:24.255 libs: 00:02:24.255 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:24.255 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:24.255 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:24.255 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:24.255 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:24.255 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:24.255 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:24.255 00:02:24.255 00:02:24.255 Message: 00:02:24.255 =============== 00:02:24.255 Drivers Enabled 00:02:24.255 =============== 00:02:24.255 00:02:24.255 common: 00:02:24.255 00:02:24.255 bus: 00:02:24.255 pci, vdev, 00:02:24.255 mempool: 00:02:24.255 ring, 00:02:24.255 dma: 
00:02:24.255 00:02:24.255 net: 00:02:24.255 i40e, 00:02:24.255 raw: 00:02:24.255 00:02:24.255 crypto: 00:02:24.255 00:02:24.255 compress: 00:02:24.255 00:02:24.255 regex: 00:02:24.255 00:02:24.255 ml: 00:02:24.255 00:02:24.255 vdpa: 00:02:24.255 00:02:24.255 event: 00:02:24.255 00:02:24.255 baseband: 00:02:24.255 00:02:24.255 gpu: 00:02:24.255 00:02:24.255 00:02:24.255 Message: 00:02:24.255 ================= 00:02:24.255 Content Skipped 00:02:24.255 ================= 00:02:24.255 00:02:24.255 apps: 00:02:24.255 00:02:24.255 libs: 00:02:24.255 00:02:24.255 drivers: 00:02:24.255 common/cpt: not in enabled drivers build config 00:02:24.255 common/dpaax: not in enabled drivers build config 00:02:24.255 common/iavf: not in enabled drivers build config 00:02:24.255 common/idpf: not in enabled drivers build config 00:02:24.255 common/mvep: not in enabled drivers build config 00:02:24.255 common/octeontx: not in enabled drivers build config 00:02:24.255 bus/auxiliary: not in enabled drivers build config 00:02:24.255 bus/cdx: not in enabled drivers build config 00:02:24.255 bus/dpaa: not in enabled drivers build config 00:02:24.255 bus/fslmc: not in enabled drivers build config 00:02:24.255 bus/ifpga: not in enabled drivers build config 00:02:24.255 bus/platform: not in enabled drivers build config 00:02:24.255 bus/vmbus: not in enabled drivers build config 00:02:24.255 common/cnxk: not in enabled drivers build config 00:02:24.255 common/mlx5: not in enabled drivers build config 00:02:24.255 common/nfp: not in enabled drivers build config 00:02:24.255 common/qat: not in enabled drivers build config 00:02:24.255 common/sfc_efx: not in enabled drivers build config 00:02:24.255 mempool/bucket: not in enabled drivers build config 00:02:24.255 mempool/cnxk: not in enabled drivers build config 00:02:24.255 mempool/dpaa: not in enabled drivers build config 00:02:24.255 mempool/dpaa2: not in enabled drivers build config 00:02:24.255 mempool/octeontx: not in enabled drivers build 
config
00:02:24.255 mempool/stack: not in enabled drivers build config
00:02:24.255 dma/cnxk: not in enabled drivers build config
00:02:24.255 dma/dpaa: not in enabled drivers build config
00:02:24.255 dma/dpaa2: not in enabled drivers build config
00:02:24.255 dma/hisilicon: not in enabled drivers build config
00:02:24.255 dma/idxd: not in enabled drivers build config
00:02:24.255 dma/ioat: not in enabled drivers build config
00:02:24.255 dma/skeleton: not in enabled drivers build config
00:02:24.255 net/af_packet: not in enabled drivers build config
00:02:24.255 net/af_xdp: not in enabled drivers build config
00:02:24.255 net/ark: not in enabled drivers build config
00:02:24.255 net/atlantic: not in enabled drivers build config
00:02:24.255 net/avp: not in enabled drivers build config
00:02:24.255 net/axgbe: not in enabled drivers build config
00:02:24.255 net/bnx2x: not in enabled drivers build config
00:02:24.255 net/bnxt: not in enabled drivers build config
00:02:24.255 net/bonding: not in enabled drivers build config
00:02:24.255 net/cnxk: not in enabled drivers build config
00:02:24.255 net/cpfl: not in enabled drivers build config
00:02:24.255 net/cxgbe: not in enabled drivers build config
00:02:24.255 net/dpaa: not in enabled drivers build config
00:02:24.255 net/dpaa2: not in enabled drivers build config
00:02:24.255 net/e1000: not in enabled drivers build config
00:02:24.255 net/ena: not in enabled drivers build config
00:02:24.255 net/enetc: not in enabled drivers build config
00:02:24.255 net/enetfec: not in enabled drivers build config
00:02:24.255 net/enic: not in enabled drivers build config
00:02:24.255 net/failsafe: not in enabled drivers build config
00:02:24.255 net/fm10k: not in enabled drivers build config
00:02:24.255 net/gve: not in enabled drivers build config
00:02:24.255 net/hinic: not in enabled drivers build config
00:02:24.255 net/hns3: not in enabled drivers build config
00:02:24.255 net/iavf: not in enabled drivers build config
00:02:24.255 net/ice: not in enabled drivers build config
00:02:24.255 net/idpf: not in enabled drivers build config
00:02:24.255 net/igc: not in enabled drivers build config
00:02:24.255 net/ionic: not in enabled drivers build config
00:02:24.255 net/ipn3ke: not in enabled drivers build config
00:02:24.255 net/ixgbe: not in enabled drivers build config
00:02:24.255 net/mana: not in enabled drivers build config
00:02:24.255 net/memif: not in enabled drivers build config
00:02:24.255 net/mlx4: not in enabled drivers build config
00:02:24.255 net/mlx5: not in enabled drivers build config
00:02:24.255 net/mvneta: not in enabled drivers build config
00:02:24.255 net/mvpp2: not in enabled drivers build config
00:02:24.255 net/netvsc: not in enabled drivers build config
00:02:24.255 net/nfb: not in enabled drivers build config
00:02:24.256 net/nfp: not in enabled drivers build config
00:02:24.256 net/ngbe: not in enabled drivers build config
00:02:24.256 net/null: not in enabled drivers build config
00:02:24.256 net/octeontx: not in enabled drivers build config
00:02:24.256 net/octeon_ep: not in enabled drivers build config
00:02:24.256 net/pcap: not in enabled drivers build config
00:02:24.256 net/pfe: not in enabled drivers build config
00:02:24.256 net/qede: not in enabled drivers build config
00:02:24.256 net/ring: not in enabled drivers build config
00:02:24.256 net/sfc: not in enabled drivers build config
00:02:24.256 net/softnic: not in enabled drivers build config
00:02:24.256 net/tap: not in enabled drivers build config
00:02:24.256 net/thunderx: not in enabled drivers build config
00:02:24.256 net/txgbe: not in enabled drivers build config
00:02:24.256 net/vdev_netvsc: not in enabled drivers build config
00:02:24.256 net/vhost: not in enabled drivers build config
00:02:24.256 net/virtio: not in enabled drivers build config
00:02:24.256 net/vmxnet3: not in enabled drivers build config
00:02:24.256 raw/cnxk_bphy: not in enabled drivers build config
00:02:24.256 raw/cnxk_gpio: not in enabled drivers build config
00:02:24.256 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:24.256 raw/ifpga: not in enabled drivers build config
00:02:24.256 raw/ntb: not in enabled drivers build config
00:02:24.256 raw/skeleton: not in enabled drivers build config
00:02:24.256 crypto/armv8: not in enabled drivers build config
00:02:24.256 crypto/bcmfs: not in enabled drivers build config
00:02:24.256 crypto/caam_jr: not in enabled drivers build config
00:02:24.256 crypto/ccp: not in enabled drivers build config
00:02:24.256 crypto/cnxk: not in enabled drivers build config
00:02:24.256 crypto/dpaa_sec: not in enabled drivers build config
00:02:24.256 crypto/dpaa2_sec: not in enabled drivers build config
00:02:24.256 crypto/ipsec_mb: not in enabled drivers build config
00:02:24.256 crypto/mlx5: not in enabled drivers build config
00:02:24.256 crypto/mvsam: not in enabled drivers build config
00:02:24.256 crypto/nitrox: not in enabled drivers build config
00:02:24.256 crypto/null: not in enabled drivers build config
00:02:24.256 crypto/octeontx: not in enabled drivers build config
00:02:24.256 crypto/openssl: not in enabled drivers build config
00:02:24.256 crypto/scheduler: not in enabled drivers build config
00:02:24.256 crypto/uadk: not in enabled drivers build config
00:02:24.256 crypto/virtio: not in enabled drivers build config
00:02:24.256 compress/isal: not in enabled drivers build config
00:02:24.256 compress/mlx5: not in enabled drivers build config
00:02:24.256 compress/octeontx: not in enabled drivers build config
00:02:24.256 compress/zlib: not in enabled drivers build config
00:02:24.256 regex/mlx5: not in enabled drivers build config
00:02:24.256 regex/cn9k: not in enabled drivers build config
00:02:24.256 ml/cnxk: not in enabled drivers build config
00:02:24.256 vdpa/ifc: not in enabled drivers build config
00:02:24.256 vdpa/mlx5: not in enabled drivers build config
00:02:24.256 vdpa/nfp: not in enabled drivers build config
00:02:24.256 vdpa/sfc: not in enabled drivers build config
00:02:24.256 event/cnxk: not in enabled drivers build config
00:02:24.256 event/dlb2: not in enabled drivers build config
00:02:24.256 event/dpaa: not in enabled drivers build config
00:02:24.256 event/dpaa2: not in enabled drivers build config
00:02:24.256 event/dsw: not in enabled drivers build config
00:02:24.256 event/opdl: not in enabled drivers build config
00:02:24.256 event/skeleton: not in enabled drivers build config
00:02:24.256 event/sw: not in enabled drivers build config
00:02:24.256 event/octeontx: not in enabled drivers build config
00:02:24.256 baseband/acc: not in enabled drivers build config
00:02:24.256 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:24.256 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:24.256 baseband/la12xx: not in enabled drivers build config
00:02:24.256 baseband/null: not in enabled drivers build config
00:02:24.256 baseband/turbo_sw: not in enabled drivers build config
00:02:24.256 gpu/cuda: not in enabled drivers build config
00:02:24.256 
00:02:24.256 
00:02:24.256 Build targets in project: 217
00:02:24.256 
00:02:24.256 DPDK 23.11.0
00:02:24.256 
00:02:24.256 User defined options
00:02:24.256 libdir : lib
00:02:24.256 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:24.256 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:24.256 c_link_args :
00:02:24.256 enable_docs : false
00:02:24.256 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:24.256 enable_kmods : false
00:02:24.256 machine : native
00:02:24.256 tests : false
00:02:24.256 
00:02:24.256 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:24.256 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:24.256 23:20:04 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:24.256 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:24.515 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:24.515 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:24.515 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:24.515 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:24.515 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:24.515 [6/707] Linking static target lib/librte_kvargs.a
00:02:24.515 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:24.515 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:24.515 [9/707] Linking static target lib/librte_log.a
00:02:24.515 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:24.773 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.773 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:24.773 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:24.773 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:24.773 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:24.773 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:25.032 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.032 [18/707] Linking target lib/librte_log.so.24.0
00:02:25.032 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:25.032 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:25.032 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:25.032 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:25.032 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:25.032 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:25.290 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:25.290 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:25.290 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:25.290 [28/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:25.290 [29/707] Linking target lib/librte_kvargs.so.24.0
00:02:25.290 [30/707] Linking static target lib/librte_telemetry.a
00:02:25.290 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:25.290 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:25.290 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:25.549 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:25.549 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:25.549 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:25.549 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:25.549 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:25.549 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:25.549 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:25.549 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:25.549 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.549 [43/707] Linking target lib/librte_telemetry.so.24.0
00:02:25.808 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:25.808 [45/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:25.808 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:25.808 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:25.808 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:26.066 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:26.066 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:26.066 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:26.066 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:26.066 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:26.067 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:26.067 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:26.067 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:26.067 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:26.067 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:26.325 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:26.325 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:26.325 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:26.325 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:26.325 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:26.325 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:26.325 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:26.325 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:26.325 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:26.325 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:26.584 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:26.584 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:26.584 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:26.584 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:26.584 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:26.584 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:26.584 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:26.584 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:26.584 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:26.843 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:26.843 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:26.843 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:26.843 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:26.843 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:27.101 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:27.101 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:27.101 [85/707] Linking static target lib/librte_ring.a
00:02:27.101 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:27.101 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:27.101 [88/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:27.101 [89/707] Linking static target lib/librte_eal.a
00:02:27.101 [90/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.360 [91/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:27.360 [92/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:27.360 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:27.360 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:27.360 [95/707] Linking static target lib/librte_mempool.a
00:02:27.618 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:27.618 [97/707] Linking static target lib/librte_rcu.a
00:02:27.618 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:27.618 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:27.618 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:27.618 [101/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:27.618 [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:27.618 [103/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:27.618 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:27.877 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.877 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.877 [107/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:27.877 [108/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:27.877 [109/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:27.877 [110/707] Linking static target lib/librte_mbuf.a
00:02:27.877 [111/707] Linking static target lib/librte_meter.a
00:02:27.877 [112/707] Linking static target lib/librte_net.a
00:02:28.135 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:28.135 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:28.135 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:28.135 [116/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.135 [117/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.135 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:28.393 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.652 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:28.652 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:28.911 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:28.911 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:28.911 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:28.911 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:28.911 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:28.911 [127/707] Linking static target lib/librte_pci.a
00:02:28.911 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:28.911 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:28.911 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:29.170 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:29.170 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:29.170 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.170 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:29.170 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:29.170 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:29.170 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:29.170 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:29.170 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:29.170 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:29.427 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:29.427 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:29.427 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:29.427 [144/707] Linking static target lib/librte_cmdline.a
00:02:29.427 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:29.685 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:29.685 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:29.685 [148/707] Linking static target lib/librte_metrics.a
00:02:29.685 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:29.685 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:29.944 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.944 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:29.944 [153/707] Linking static target lib/librte_timer.a
00:02:29.944 [154/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.944 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:30.203 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.203 [157/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:30.462 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:30.462 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:30.462 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:30.721 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:30.721 [162/707] Linking static target lib/librte_bitratestats.a
00:02:30.721 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:30.981 [164/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:30.981 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.981 [166/707] Linking static target lib/librte_bbdev.a
00:02:30.981 [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:31.240 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:31.240 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:31.500 [170/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.500 [171/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:31.500 [172/707] Linking static target lib/librte_hash.a
00:02:31.500 [173/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:31.759 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:31.759 [175/707] Linking static target lib/librte_ethdev.a
00:02:31.759 [176/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:31.759 [177/707] Linking static target lib/acl/libavx2_tmp.a
00:02:31.759 [178/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:31.759 [179/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:32.018 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:32.018 [181/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.018 [182/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.018 [183/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:32.018 [184/707] Linking static target lib/librte_cfgfile.a
00:02:32.018 [185/707] Linking target lib/librte_eal.so.24.0
00:02:32.018 [186/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:32.018 [187/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:32.018 [188/707] Linking target lib/librte_ring.so.24.0
00:02:32.277 [189/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:32.277 [190/707] Linking target lib/librte_meter.so.24.0
00:02:32.277 [191/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:32.277 [192/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.277 [193/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:32.277 [194/707] Linking target lib/librte_rcu.so.24.0
00:02:32.277 [195/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:32.278 [196/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:32.278 [197/707] Linking target lib/librte_pci.so.24.0
00:02:32.278 [198/707] Linking target lib/librte_mempool.so.24.0
00:02:32.278 [199/707] Linking target lib/librte_timer.so.24.0
00:02:32.278 [200/707] Linking target lib/librte_cfgfile.so.24.0
00:02:32.278 [201/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:32.278 [202/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:32.278 [203/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:32.537 [204/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:32.537 [205/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:32.537 [206/707] Linking target lib/librte_mbuf.so.24.0
00:02:32.537 [207/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:32.537 [208/707] Linking static target lib/librte_bpf.a
00:02:32.537 [209/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:32.537 [210/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:32.537 [211/707] Linking target lib/librte_net.so.24.0
00:02:32.537 [212/707] Linking target lib/librte_bbdev.so.24.0
00:02:32.537 [213/707] Linking static target lib/librte_compressdev.a
00:02:32.796 [214/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:32.796 [215/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:32.796 [216/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.796 [217/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:32.796 [218/707] Linking target lib/librte_cmdline.so.24.0
00:02:32.796 [219/707] Linking static target lib/librte_acl.a
00:02:32.796 [220/707] Linking target lib/librte_hash.so.24.0
00:02:32.796 [221/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:32.796 [222/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:32.796 [223/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:33.055 [224/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.055 [225/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:33.055 [226/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.055 [227/707] Linking target lib/librte_acl.so.24.0
00:02:33.055 [228/707] Linking target lib/librte_compressdev.so.24.0
00:02:33.055 [229/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:33.055 [230/707] Linking static target lib/librte_distributor.a
00:02:33.055 [231/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:02:33.055 [232/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:33.314 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:33.314 [234/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.314 [235/707] Linking target lib/librte_distributor.so.24.0
00:02:33.314 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:33.314 [237/707] Linking static target lib/librte_dmadev.a
00:02:33.574 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:33.574 [239/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.833 [240/707] Linking target lib/librte_dmadev.so.24.0
00:02:33.833 [241/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:33.833 [242/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:33.833 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:34.093 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:34.093 [245/707] Linking static target lib/librte_efd.a
00:02:34.093 [246/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:34.093 [247/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.093 [248/707] Linking target lib/librte_efd.so.24.0
00:02:34.352 [249/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:34.352 [250/707] Linking static target lib/librte_cryptodev.a
00:02:34.352 [251/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:34.352 [252/707] Linking static target lib/librte_dispatcher.a
00:02:34.352 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:34.610 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:34.610 [255/707] Linking static target lib/librte_gpudev.a
00:02:34.610 [256/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:34.610 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:34.610 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:34.869 [259/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.869 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:35.128 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:35.128 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:35.128 [263/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:35.128 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:35.128 [265/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.128 [266/707] Linking static target lib/librte_gro.a
00:02:35.388 [267/707] Linking target lib/librte_gpudev.so.24.0
00:02:35.388 [268/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:35.388 [269/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.388 [270/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:35.388 [271/707] Linking target lib/librte_cryptodev.so.24.0
00:02:35.388 [272/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:35.388 [273/707] Linking static target lib/librte_eventdev.a
00:02:35.388 [274/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.388 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:35.388 [276/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:35.647 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:35.647 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:35.647 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:35.647 [280/707] Linking static target lib/librte_gso.a
00:02:35.647 [281/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.647 [282/707] Linking target lib/librte_ethdev.so.24.0
00:02:35.906 [283/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.906 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:35.906 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:35.906 [286/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:35.906 [287/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:35.906 [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:35.906 [289/707] Linking target lib/librte_metrics.so.24.0
00:02:35.906 [290/707] Linking target lib/librte_gro.so.24.0
00:02:35.906 [291/707] Linking target lib/librte_bpf.so.24.0
00:02:35.906 [292/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:35.906 [293/707] Linking static target lib/librte_jobstats.a
00:02:35.906 [294/707] Linking target lib/librte_gso.so.24.0
00:02:35.906 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:35.906 [296/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:35.906 [297/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:35.906 [298/707] Linking target lib/librte_bitratestats.so.24.0
00:02:36.165 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:36.165 [300/707] Linking static target lib/librte_ip_frag.a
00:02:36.165 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.165 [302/707] Linking target lib/librte_jobstats.so.24.0
00:02:36.165 [303/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:36.424 [304/707] Linking static target lib/librte_latencystats.a
00:02:36.424 [305/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.424 [306/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:36.424 [307/707] Linking target lib/librte_ip_frag.so.24.0
00:02:36.424 [308/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:36.424 [309/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:36.424 [310/707] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:36.424 [311/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:36.424 [312/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:36.424 [313/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.424 [314/707] Linking target lib/librte_latencystats.so.24.0
00:02:36.684 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:36.684 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:36.684 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:36.684 [318/707] Linking static target lib/librte_lpm.a
00:02:36.942 [319/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:36.942 [320/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:36.942 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:36.942 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:36.942 [323/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:36.942 [324/707] Linking static target lib/librte_pcapng.a
00:02:36.942 [325/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:36.942 [326/707] Linking target lib/librte_lpm.so.24.0
00:02:36.942 [327/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:36.942 [328/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:37.201 [329/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:02:37.201 [330/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.201 [331/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.201 [332/707] Linking target lib/librte_pcapng.so.24.0
00:02:37.201 [333/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:37.201 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:37.201 [335/707] Linking target lib/librte_eventdev.so.24.0
00:02:37.201 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:37.460 [337/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:37.460 [338/707] Linking target lib/librte_dispatcher.so.24.0
00:02:37.460 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:37.460 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:37.460 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:37.460 [342/707] Linking static target lib/librte_power.a
00:02:37.460 [343/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:37.460 [344/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:37.460 [345/707] Linking static target lib/librte_rawdev.a
00:02:37.460 [346/707] Linking static target lib/librte_regexdev.a
00:02:37.460 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:37.726 [348/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:37.726 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:37.726 [350/707] Linking static target lib/librte_member.a
00:02:37.726 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:37.726 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:37.726 [353/707] Linking static target lib/librte_mldev.a
00:02:38.003 [354/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.003 [355/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:38.003 [356/707] Linking target lib/librte_rawdev.so.24.0
00:02:38.003 [357/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.003 [358/707] Linking target lib/librte_member.so.24.0
00:02:38.003 [359/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.003 [360/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:38.003 [361/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:38.003 [362/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:38.003 [363/707] Linking static target lib/librte_reorder.a
00:02:38.003 [364/707] Linking target lib/librte_power.so.24.0
00:02:38.279 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.279 [366/707] Linking target lib/librte_regexdev.so.24.0
00:02:38.279 [367/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:38.279 [368/707] Linking static target lib/librte_rib.a
00:02:38.279 [369/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:38.279 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:38.279 [371/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.279 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:38.279 [373/707] Linking target lib/librte_reorder.so.24.0
00:02:38.279 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:38.279 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:38.538 [376/707] Linking static target lib/librte_stack.a
00:02:38.538 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:02:38.538 [378/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.538 [379/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.538 [380/707] Linking target lib/librte_stack.so.24.0
00:02:38.538 [381/707] Linking target lib/librte_rib.so.24.0
00:02:38.538 [382/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:38.538 [383/707] Linking static target lib/librte_security.a
00:02:38.797 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:02:38.797 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:38.797 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:38.797 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.797 [388/707] Linking target lib/librte_mldev.so.24.0 00:02:39.056 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.056 [390/707] Linking target lib/librte_security.so.24.0 00:02:39.056 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.056 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:39.056 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:39.056 [394/707] Linking static target lib/librte_sched.a 00:02:39.316 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:39.316 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:39.581 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.581 [398/707] Linking target lib/librte_sched.so.24.0 00:02:39.581 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:39.581 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:39.581 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:39.581 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:39.839 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.098 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:40.098 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:40.098 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:40.098 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:40.356 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:40.356 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:40.356 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:40.356 [411/707] Compiling C object 
lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:40.356 [412/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:40.356 [413/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:40.356 [414/707] Linking static target lib/librte_ipsec.a 00:02:40.615 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:40.874 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.874 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:40.874 [418/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:40.874 [419/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:40.874 [420/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:41.133 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:41.133 [422/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:41.133 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:41.133 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:41.133 [425/707] Linking static target lib/librte_fib.a 00:02:41.392 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:41.392 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:41.392 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:41.392 [429/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.651 [430/707] Linking target lib/librte_fib.so.24.0 00:02:41.651 [431/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:41.651 [432/707] Linking static target lib/librte_pdcp.a 00:02:41.909 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.909 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:41.909 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 
00:02:41.909 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:42.168 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:42.168 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:42.168 [439/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:42.168 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:42.427 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:42.427 [442/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:42.427 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:42.427 [444/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:42.427 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:42.427 [446/707] Linking static target lib/librte_port.a 00:02:42.427 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:42.427 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:42.686 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:42.686 [450/707] Linking static target lib/librte_pdump.a 00:02:42.945 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:42.945 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:42.945 [453/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.945 [454/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:42.945 [455/707] Linking target lib/librte_port.so.24.0 00:02:42.945 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.945 [457/707] Linking target lib/librte_pdump.so.24.0 00:02:42.945 [458/707] Generating symbol file 
lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:43.204 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:43.204 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:43.204 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:43.463 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:43.463 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:43.463 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:43.721 [465/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:43.721 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:43.721 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:43.721 [468/707] Linking static target lib/librte_table.a 00:02:43.721 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:43.979 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:44.238 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:44.238 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.238 [473/707] Linking target lib/librte_table.so.24.0 00:02:44.238 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:44.238 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:44.497 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:44.497 [477/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:44.497 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:44.756 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:44.756 [480/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:02:44.756 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:44.756 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:45.015 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:45.015 [484/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:45.015 [485/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:45.015 [486/707] Linking static target lib/librte_graph.a 00:02:45.015 [487/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:45.274 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:45.274 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:45.274 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:45.533 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:45.793 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.793 [493/707] Linking target lib/librte_graph.so.24.0 00:02:45.793 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:45.793 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:45.793 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:46.052 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:46.052 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:46.052 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:46.310 [500/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:46.310 [501/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:46.310 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:46.310 [503/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:46.310 [504/707] Compiling C object 
lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:46.569 [505/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:46.569 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:46.569 [507/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:46.569 [508/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:46.569 [509/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:46.569 [510/707] Linking static target lib/librte_node.a 00:02:46.569 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:46.828 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:46.828 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.828 [514/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:46.828 [515/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:46.828 [516/707] Linking target lib/librte_node.so.24.0 00:02:47.087 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:47.087 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:47.087 [519/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:47.087 [520/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.087 [521/707] Linking static target drivers/librte_bus_pci.a 00:02:47.087 [522/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:47.347 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.347 [524/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.347 [525/707] Linking static target drivers/librte_bus_vdev.a 00:02:47.347 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 
00:02:47.347 [527/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:47.347 [528/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.347 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:47.347 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.347 [531/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.606 [532/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:47.606 [533/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:47.606 [534/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:47.606 [535/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:47.606 [536/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:47.606 [537/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:47.606 [538/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:47.606 [539/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:47.606 [540/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.606 [541/707] Linking static target drivers/librte_mempool_ring.a 00:02:47.606 [542/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.606 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:47.864 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:48.124 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:48.382 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:48.382 [547/707] Linking 
static target drivers/net/i40e/base/libi40e_base.a 00:02:48.641 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:48.899 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:49.157 [550/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:49.157 [551/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:49.157 [552/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:49.157 [553/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:49.157 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:49.416 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:49.416 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:49.674 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:49.674 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:49.674 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:49.933 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:49.933 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:49.933 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:50.190 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:50.449 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:50.449 [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:50.449 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:50.449 [567/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:50.449 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:50.708 [569/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:50.708 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:50.708 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:50.708 [572/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:50.708 [573/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:50.966 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:50.966 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:51.225 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:51.225 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:51.225 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:51.225 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:51.484 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:51.484 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:51.484 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:51.484 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:51.741 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:51.741 [585/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:51.741 [586/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.741 [587/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:51.741 [588/707] Linking static target drivers/librte_net_i40e.a 00:02:51.741 [589/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:51.741 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:52.307 [591/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:52.307 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:52.307 [593/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.307 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:52.307 [595/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:52.307 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:52.307 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:52.565 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:52.565 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:52.859 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:52.859 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:52.859 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:52.859 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:52.859 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:53.116 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:53.116 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:53.116 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:53.373 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:53.373 [609/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 
00:02:53.373 [610/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:53.373 [611/707] Linking static target lib/librte_vhost.a 00:02:53.373 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:53.373 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:53.373 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:53.635 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:53.909 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:53.909 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:53.909 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:54.193 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.193 [620/707] Linking target lib/librte_vhost.so.24.0 00:02:54.452 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:54.452 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:54.711 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:54.711 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:54.711 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:54.711 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:54.711 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:55.083 [628/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:55.083 [629/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:55.083 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 
00:02:55.083 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:55.083 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:55.083 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:55.083 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:55.342 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:55.342 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:55.342 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:55.342 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:55.342 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:55.600 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:55.600 [641/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:55.600 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:55.600 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:55.857 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:55.857 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.857 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.857 [647/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:56.114 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:56.114 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:56.114 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:56.114 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:56.374 
[652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:56.374 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:56.374 [654/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:56.633 [655/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:56.633 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:56.633 [657/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:56.633 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:56.892 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:57.151 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:57.151 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:57.151 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:57.151 [663/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:57.151 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:57.151 [665/707] Linking static target lib/librte_pipeline.a 00:02:57.409 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:57.409 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:57.668 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:57.668 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:57.668 [670/707] Linking target app/dpdk-dumpcap 00:02:57.926 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:57.926 [672/707] Linking target app/dpdk-graph 00:02:57.926 [673/707] Linking target app/dpdk-pdump 00:02:57.926 [674/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:57.926 [675/707] Linking target app/dpdk-proc-info 00:02:58.185 [676/707] Linking target app/dpdk-test-acl 00:02:58.185 [677/707] 
Linking target app/dpdk-test-cmdline
00:02:58.185 [678/707] Linking target app/dpdk-test-bbdev
00:02:58.185 [679/707] Linking target app/dpdk-test-compress-perf
00:02:58.185 [680/707] Linking target app/dpdk-test-crypto-perf
00:02:58.443 [681/707] Linking target app/dpdk-test-dma-perf
00:02:58.443 [682/707] Linking target app/dpdk-test-eventdev
00:02:58.443 [683/707] Linking target app/dpdk-test-fib
00:02:58.702 [684/707] Linking target app/dpdk-test-flow-perf
00:02:58.702 [685/707] Linking target app/dpdk-test-gpudev
00:02:58.702 [686/707] Linking target app/dpdk-test-mldev
00:02:58.702 [687/707] Linking target app/dpdk-test-pipeline
00:02:58.961 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:58.961 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:58.961 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:58.961 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:58.961 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:58.961 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:59.219 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:59.219 [695/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.478 [696/707] Linking target lib/librte_pipeline.so.24.0
00:02:59.478 [697/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:59.478 [698/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:59.737 [699/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:59.737 [700/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:59.737 [701/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:59.737 [702/707] Linking target app/dpdk-test-sad
00:02:59.996 [703/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:59.996 [704/707] Linking target app/dpdk-test-regex
00:02:59.996 [705/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:00.255 [706/707] Linking target app/dpdk-test-security-perf
00:03:00.516 [707/707] Linking target app/dpdk-testpmd
00:03:00.516 23:20:40 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:00.516 23:20:40 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:00.516 23:20:40 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:00.516 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:00.516 [0/1] Installing files.
00:03:00.780 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.780 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.781 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:00.782 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.783 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 
00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.784 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.784 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:00.784 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.784 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_rcu.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.045 Installing 
lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.045 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.045 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.045 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.045 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.045 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.045 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.045 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.046 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.309 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.309 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.309 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.309 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:01.309 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.309 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:01.309 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.309 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:01.309 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:01.309 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:03:01.309 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.309 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.309 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.309 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.309 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.309 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.310 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.311 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:01.312 
Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.312 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:01.313 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:01.313 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:01.313 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:01.313 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:01.313 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:01.313 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:01.313 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:01.313 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:01.313 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:01.313 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:01.313 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:01.313 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:01.313 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:01.313 Installing symlink pointing to librte_mempool.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:01.313 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:01.313 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:01.313 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:01.313 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:01.313 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:01.313 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:01.313 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:01.313 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:01.313 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:01.313 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:01.313 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:01.313 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:01.313 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:01.313 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:01.313 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:01.313 Installing symlink pointing to librte_hash.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:01.313 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:01.313 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:01.313 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:01.313 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:01.313 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:01.313 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:01.313 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:01.313 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:01.313 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:01.313 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:01.313 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:01.313 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:01.313 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:01.313 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:01.313 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:01.313 Installing symlink pointing to 
librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:01.313 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:01.313 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:01.313 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:01.313 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:01.313 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:01.313 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:01.313 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:01.313 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:01.313 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:01.313 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:01.313 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:01.313 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:01.313 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:01.313 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:01.313 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 
00:03:01.313 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:01.313 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:01.313 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:01.313 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:01.313 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:01.313 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:01.313 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:01.313 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:01.313 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:01.313 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:01.313 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:01.313 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:01.313 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:01.313 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:01.313 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:01.313 Installing symlink pointing to librte_power.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:01.313 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:01.313 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:01.313 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:01.313 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:01.313 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:01.313 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:01.313 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:01.313 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:01.313 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:01.313 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:01.313 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:01.313 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:01.313 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:01.313 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:01.313 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:01.313 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:01.313 
'./librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:01.313 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:01.313 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:01.313 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:01.314 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:01.314 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:01.314 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:01.314 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:01.314 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:01.314 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:01.314 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:01.314 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:01.314 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:01.314 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:01.314 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:01.314 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:01.314 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:01.314 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:01.314 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:01.314 Installing symlink pointing to librte_fib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:01.314 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:01.314 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:01.314 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:01.314 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:01.314 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:01.314 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:01.314 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:01.314 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:01.314 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:01.314 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:01.314 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:01.314 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:01.314 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:01.314 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:01.314 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:01.314 Installing 
symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:01.314 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:01.314 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:01.314 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:01.314 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:01.314 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:01.314 23:20:41 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:01.314 23:20:41 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:01.314 00:03:01.314 real 0m44.472s 00:03:01.314 user 4m56.624s 00:03:01.314 sys 0m55.060s 00:03:01.314 23:20:41 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:01.314 23:20:41 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:01.314 ************************************ 00:03:01.314 END TEST build_native_dpdk 00:03:01.314 ************************************ 00:03:01.574 23:20:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:01.574 23:20:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:01.574 23:20:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:01.574 23:20:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:01.574 23:20:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:01.574 23:20:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:01.574 23:20:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:01.574 23:20:41 -- spdk/autobuild.sh@67 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:01.574 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:01.833 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:01.833 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:01.833 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:02.093 Using 'verbs' RDMA provider 00:03:18.370 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:36.538 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:36.538 Creating mk/config.mk...done. 00:03:36.538 Creating mk/cc.flags.mk...done. 00:03:36.538 Type 'make' to build. 00:03:36.538 23:21:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:36.538 23:21:14 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:36.538 23:21:14 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:36.538 23:21:14 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.538 ************************************ 00:03:36.538 START TEST make 00:03:36.538 ************************************ 00:03:36.538 23:21:14 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:36.538 make[1]: Nothing to be done for 'all'. 
00:04:23.262 CC lib/log/log.o 00:04:23.262 CC lib/log/log_flags.o 00:04:23.262 CC lib/log/log_deprecated.o 00:04:23.262 CC lib/ut/ut.o 00:04:23.262 CC lib/ut_mock/mock.o 00:04:23.262 LIB libspdk_ut.a 00:04:23.262 LIB libspdk_log.a 00:04:23.262 SO libspdk_ut.so.2.0 00:04:23.262 SO libspdk_log.so.7.0 00:04:23.262 LIB libspdk_ut_mock.a 00:04:23.262 SYMLINK libspdk_log.so 00:04:23.262 SYMLINK libspdk_ut.so 00:04:23.262 SO libspdk_ut_mock.so.6.0 00:04:23.262 SYMLINK libspdk_ut_mock.so 00:04:23.262 CC lib/ioat/ioat.o 00:04:23.262 CXX lib/trace_parser/trace.o 00:04:23.262 CC lib/util/crc16.o 00:04:23.262 CC lib/util/cpuset.o 00:04:23.262 CC lib/util/base64.o 00:04:23.262 CC lib/util/crc32.o 00:04:23.262 CC lib/util/bit_array.o 00:04:23.262 CC lib/util/crc32c.o 00:04:23.262 CC lib/dma/dma.o 00:04:23.262 CC lib/vfio_user/host/vfio_user_pci.o 00:04:23.262 CC lib/vfio_user/host/vfio_user.o 00:04:23.262 CC lib/util/crc32_ieee.o 00:04:23.262 CC lib/util/crc64.o 00:04:23.262 CC lib/util/dif.o 00:04:23.262 CC lib/util/fd.o 00:04:23.262 LIB libspdk_dma.a 00:04:23.262 CC lib/util/fd_group.o 00:04:23.262 SO libspdk_dma.so.5.0 00:04:23.262 CC lib/util/file.o 00:04:23.262 CC lib/util/hexlify.o 00:04:23.262 LIB libspdk_ioat.a 00:04:23.262 SYMLINK libspdk_dma.so 00:04:23.262 CC lib/util/iov.o 00:04:23.262 SO libspdk_ioat.so.7.0 00:04:23.262 CC lib/util/math.o 00:04:23.262 CC lib/util/net.o 00:04:23.262 SYMLINK libspdk_ioat.so 00:04:23.262 CC lib/util/pipe.o 00:04:23.262 LIB libspdk_vfio_user.a 00:04:23.262 CC lib/util/strerror_tls.o 00:04:23.262 SO libspdk_vfio_user.so.5.0 00:04:23.262 CC lib/util/string.o 00:04:23.262 CC lib/util/uuid.o 00:04:23.262 SYMLINK libspdk_vfio_user.so 00:04:23.262 CC lib/util/xor.o 00:04:23.262 CC lib/util/zipf.o 00:04:23.262 CC lib/util/md5.o 00:04:23.262 LIB libspdk_util.a 00:04:23.262 SO libspdk_util.so.10.0 00:04:23.262 LIB libspdk_trace_parser.a 00:04:23.262 SYMLINK libspdk_util.so 00:04:23.262 SO libspdk_trace_parser.so.6.0 00:04:23.262 SYMLINK 
libspdk_trace_parser.so 00:04:23.262 CC lib/rdma_provider/common.o 00:04:23.262 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:23.262 CC lib/json/json_parse.o 00:04:23.262 CC lib/json/json_util.o 00:04:23.262 CC lib/json/json_write.o 00:04:23.262 CC lib/rdma_utils/rdma_utils.o 00:04:23.262 CC lib/env_dpdk/env.o 00:04:23.262 CC lib/vmd/vmd.o 00:04:23.262 CC lib/conf/conf.o 00:04:23.262 CC lib/idxd/idxd.o 00:04:23.262 CC lib/idxd/idxd_user.o 00:04:23.262 LIB libspdk_rdma_provider.a 00:04:23.262 CC lib/idxd/idxd_kernel.o 00:04:23.262 LIB libspdk_conf.a 00:04:23.262 SO libspdk_rdma_provider.so.6.0 00:04:23.262 SO libspdk_conf.so.6.0 00:04:23.262 LIB libspdk_rdma_utils.a 00:04:23.262 CC lib/env_dpdk/memory.o 00:04:23.262 SO libspdk_rdma_utils.so.1.0 00:04:23.262 LIB libspdk_json.a 00:04:23.262 SYMLINK libspdk_rdma_provider.so 00:04:23.262 SYMLINK libspdk_conf.so 00:04:23.262 CC lib/vmd/led.o 00:04:23.262 CC lib/env_dpdk/pci.o 00:04:23.262 SO libspdk_json.so.6.0 00:04:23.262 SYMLINK libspdk_rdma_utils.so 00:04:23.262 CC lib/env_dpdk/init.o 00:04:23.262 CC lib/env_dpdk/threads.o 00:04:23.262 CC lib/env_dpdk/pci_ioat.o 00:04:23.262 SYMLINK libspdk_json.so 00:04:23.262 CC lib/env_dpdk/pci_virtio.o 00:04:23.262 CC lib/env_dpdk/pci_vmd.o 00:04:23.262 CC lib/env_dpdk/pci_idxd.o 00:04:23.262 CC lib/env_dpdk/pci_event.o 00:04:23.262 CC lib/jsonrpc/jsonrpc_server.o 00:04:23.262 CC lib/env_dpdk/sigbus_handler.o 00:04:23.262 CC lib/env_dpdk/pci_dpdk.o 00:04:23.262 LIB libspdk_idxd.a 00:04:23.262 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:23.262 SO libspdk_idxd.so.12.1 00:04:23.262 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:23.262 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:23.263 SYMLINK libspdk_idxd.so 00:04:23.263 CC lib/jsonrpc/jsonrpc_client.o 00:04:23.263 LIB libspdk_vmd.a 00:04:23.263 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:23.263 SO libspdk_vmd.so.6.0 00:04:23.263 SYMLINK libspdk_vmd.so 00:04:23.263 LIB libspdk_jsonrpc.a 00:04:23.263 SO libspdk_jsonrpc.so.6.0 00:04:23.263 SYMLINK 
libspdk_jsonrpc.so 00:04:23.263 LIB libspdk_env_dpdk.a 00:04:23.263 CC lib/rpc/rpc.o 00:04:23.263 SO libspdk_env_dpdk.so.15.0 00:04:23.263 LIB libspdk_rpc.a 00:04:23.263 SYMLINK libspdk_env_dpdk.so 00:04:23.263 SO libspdk_rpc.so.6.0 00:04:23.263 SYMLINK libspdk_rpc.so 00:04:23.263 CC lib/trace/trace_flags.o 00:04:23.263 CC lib/trace/trace.o 00:04:23.263 CC lib/trace/trace_rpc.o 00:04:23.263 CC lib/notify/notify_rpc.o 00:04:23.263 CC lib/notify/notify.o 00:04:23.263 CC lib/keyring/keyring.o 00:04:23.263 CC lib/keyring/keyring_rpc.o 00:04:23.263 LIB libspdk_notify.a 00:04:23.263 SO libspdk_notify.so.6.0 00:04:23.263 SYMLINK libspdk_notify.so 00:04:23.263 LIB libspdk_trace.a 00:04:23.263 LIB libspdk_keyring.a 00:04:23.263 SO libspdk_trace.so.11.0 00:04:23.263 SO libspdk_keyring.so.2.0 00:04:23.263 SYMLINK libspdk_trace.so 00:04:23.263 SYMLINK libspdk_keyring.so 00:04:23.263 CC lib/sock/sock.o 00:04:23.263 CC lib/sock/sock_rpc.o 00:04:23.263 CC lib/thread/iobuf.o 00:04:23.263 CC lib/thread/thread.o 00:04:23.263 LIB libspdk_sock.a 00:04:23.263 SO libspdk_sock.so.10.0 00:04:23.263 SYMLINK libspdk_sock.so 00:04:23.263 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:23.263 CC lib/nvme/nvme_ctrlr.o 00:04:23.263 CC lib/nvme/nvme_fabric.o 00:04:23.263 CC lib/nvme/nvme_ns_cmd.o 00:04:23.263 CC lib/nvme/nvme_pcie_common.o 00:04:23.263 CC lib/nvme/nvme_ns.o 00:04:23.263 CC lib/nvme/nvme_pcie.o 00:04:23.263 CC lib/nvme/nvme.o 00:04:23.263 CC lib/nvme/nvme_qpair.o 00:04:23.263 LIB libspdk_thread.a 00:04:23.263 SO libspdk_thread.so.10.1 00:04:23.263 CC lib/nvme/nvme_quirks.o 00:04:23.263 SYMLINK libspdk_thread.so 00:04:23.263 CC lib/nvme/nvme_transport.o 00:04:23.263 CC lib/nvme/nvme_discovery.o 00:04:23.263 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:23.263 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:23.263 CC lib/nvme/nvme_tcp.o 00:04:23.263 CC lib/nvme/nvme_opal.o 00:04:23.263 CC lib/nvme/nvme_io_msg.o 00:04:23.263 CC lib/nvme/nvme_poll_group.o 00:04:23.263 CC lib/nvme/nvme_zns.o 00:04:23.263 CC 
lib/nvme/nvme_stubs.o 00:04:23.263 CC lib/accel/accel.o 00:04:23.263 CC lib/nvme/nvme_auth.o 00:04:23.263 CC lib/nvme/nvme_cuse.o 00:04:23.263 CC lib/accel/accel_rpc.o 00:04:23.263 CC lib/nvme/nvme_rdma.o 00:04:23.263 CC lib/accel/accel_sw.o 00:04:23.522 CC lib/blob/blobstore.o 00:04:23.782 CC lib/init/json_config.o 00:04:23.782 CC lib/blob/request.o 00:04:23.782 CC lib/virtio/virtio.o 00:04:24.041 CC lib/init/subsystem.o 00:04:24.041 CC lib/init/subsystem_rpc.o 00:04:24.041 CC lib/init/rpc.o 00:04:24.041 CC lib/blob/zeroes.o 00:04:24.041 CC lib/virtio/virtio_vhost_user.o 00:04:24.041 CC lib/blob/blob_bs_dev.o 00:04:24.041 CC lib/virtio/virtio_vfio_user.o 00:04:24.041 LIB libspdk_accel.a 00:04:24.041 CC lib/virtio/virtio_pci.o 00:04:24.041 LIB libspdk_init.a 00:04:24.041 SO libspdk_accel.so.16.0 00:04:24.300 SO libspdk_init.so.6.0 00:04:24.300 CC lib/fsdev/fsdev.o 00:04:24.300 CC lib/fsdev/fsdev_io.o 00:04:24.300 CC lib/fsdev/fsdev_rpc.o 00:04:24.300 SYMLINK libspdk_accel.so 00:04:24.300 SYMLINK libspdk_init.so 00:04:24.300 CC lib/event/reactor.o 00:04:24.300 CC lib/event/app.o 00:04:24.300 CC lib/event/log_rpc.o 00:04:24.300 CC lib/event/app_rpc.o 00:04:24.300 CC lib/bdev/bdev.o 00:04:24.563 LIB libspdk_virtio.a 00:04:24.563 SO libspdk_virtio.so.7.0 00:04:24.563 SYMLINK libspdk_virtio.so 00:04:24.563 CC lib/bdev/bdev_rpc.o 00:04:24.563 CC lib/bdev/bdev_zone.o 00:04:24.563 CC lib/event/scheduler_static.o 00:04:24.563 LIB libspdk_nvme.a 00:04:24.563 CC lib/bdev/part.o 00:04:24.563 CC lib/bdev/scsi_nvme.o 00:04:24.827 SO libspdk_nvme.so.14.0 00:04:24.827 LIB libspdk_fsdev.a 00:04:24.827 SO libspdk_fsdev.so.1.0 00:04:24.827 LIB libspdk_event.a 00:04:24.827 SO libspdk_event.so.14.0 00:04:24.827 SYMLINK libspdk_fsdev.so 00:04:25.086 SYMLINK libspdk_event.so 00:04:25.086 SYMLINK libspdk_nvme.so 00:04:25.346 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:25.916 LIB libspdk_fuse_dispatcher.a 00:04:25.916 SO libspdk_fuse_dispatcher.so.1.0 00:04:26.175 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:26.807 LIB libspdk_blob.a 00:04:27.083 SO libspdk_blob.so.11.0 00:04:27.083 SYMLINK libspdk_blob.so 00:04:27.083 LIB libspdk_bdev.a 00:04:27.083 SO libspdk_bdev.so.16.0 00:04:27.342 SYMLINK libspdk_bdev.so 00:04:27.342 CC lib/blobfs/blobfs.o 00:04:27.342 CC lib/blobfs/tree.o 00:04:27.342 CC lib/lvol/lvol.o 00:04:27.342 CC lib/ublk/ublk.o 00:04:27.342 CC lib/ublk/ublk_rpc.o 00:04:27.342 CC lib/scsi/dev.o 00:04:27.342 CC lib/scsi/lun.o 00:04:27.601 CC lib/ftl/ftl_core.o 00:04:27.601 CC lib/nbd/nbd.o 00:04:27.601 CC lib/nvmf/ctrlr.o 00:04:27.601 CC lib/nvmf/ctrlr_discovery.o 00:04:27.601 CC lib/nvmf/ctrlr_bdev.o 00:04:27.601 CC lib/nvmf/subsystem.o 00:04:27.859 CC lib/scsi/port.o 00:04:27.859 CC lib/ftl/ftl_init.o 00:04:27.859 CC lib/nbd/nbd_rpc.o 00:04:27.859 CC lib/scsi/scsi.o 00:04:28.116 LIB libspdk_nbd.a 00:04:28.116 CC lib/ftl/ftl_layout.o 00:04:28.116 CC lib/nvmf/nvmf.o 00:04:28.116 SO libspdk_nbd.so.7.0 00:04:28.116 CC lib/scsi/scsi_bdev.o 00:04:28.116 SYMLINK libspdk_nbd.so 00:04:28.116 CC lib/scsi/scsi_pr.o 00:04:28.116 LIB libspdk_ublk.a 00:04:28.116 SO libspdk_ublk.so.3.0 00:04:28.116 LIB libspdk_blobfs.a 00:04:28.374 SYMLINK libspdk_ublk.so 00:04:28.374 CC lib/ftl/ftl_debug.o 00:04:28.374 SO libspdk_blobfs.so.10.0 00:04:28.374 SYMLINK libspdk_blobfs.so 00:04:28.375 CC lib/nvmf/nvmf_rpc.o 00:04:28.375 CC lib/ftl/ftl_io.o 00:04:28.375 LIB libspdk_lvol.a 00:04:28.375 CC lib/ftl/ftl_sb.o 00:04:28.375 SO libspdk_lvol.so.10.0 00:04:28.375 CC lib/scsi/scsi_rpc.o 00:04:28.375 SYMLINK libspdk_lvol.so 00:04:28.375 CC lib/ftl/ftl_l2p.o 00:04:28.375 CC lib/ftl/ftl_l2p_flat.o 00:04:28.633 CC lib/ftl/ftl_nv_cache.o 00:04:28.633 CC lib/ftl/ftl_band.o 00:04:28.633 CC lib/scsi/task.o 00:04:28.633 CC lib/ftl/ftl_band_ops.o 00:04:28.633 CC lib/ftl/ftl_writer.o 00:04:28.633 CC lib/nvmf/transport.o 00:04:28.892 LIB libspdk_scsi.a 00:04:28.892 SO libspdk_scsi.so.9.0 00:04:28.892 CC lib/ftl/ftl_rq.o 00:04:28.892 CC lib/ftl/ftl_reloc.o 
00:04:28.892 CC lib/ftl/ftl_l2p_cache.o 00:04:29.150 SYMLINK libspdk_scsi.so 00:04:29.150 CC lib/nvmf/tcp.o 00:04:29.150 CC lib/nvmf/stubs.o 00:04:29.150 CC lib/nvmf/mdns_server.o 00:04:29.150 CC lib/nvmf/rdma.o 00:04:29.150 CC lib/nvmf/auth.o 00:04:29.408 CC lib/ftl/ftl_p2l.o 00:04:29.408 CC lib/ftl/ftl_p2l_log.o 00:04:29.408 CC lib/ftl/mngt/ftl_mngt.o 00:04:29.408 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:29.408 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:29.665 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:29.665 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:29.665 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:29.665 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:29.665 CC lib/iscsi/conn.o 00:04:29.665 CC lib/iscsi/init_grp.o 00:04:29.923 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:29.923 CC lib/vhost/vhost.o 00:04:29.923 CC lib/vhost/vhost_rpc.o 00:04:29.923 CC lib/vhost/vhost_scsi.o 00:04:29.923 CC lib/vhost/vhost_blk.o 00:04:29.923 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:29.923 CC lib/iscsi/iscsi.o 00:04:30.182 CC lib/iscsi/param.o 00:04:30.441 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:30.441 CC lib/iscsi/portal_grp.o 00:04:30.441 CC lib/iscsi/tgt_node.o 00:04:30.441 CC lib/vhost/rte_vhost_user.o 00:04:30.441 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:30.700 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:30.700 CC lib/iscsi/iscsi_subsystem.o 00:04:30.700 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:30.958 CC lib/ftl/utils/ftl_conf.o 00:04:30.958 CC lib/iscsi/iscsi_rpc.o 00:04:30.958 CC lib/iscsi/task.o 00:04:30.958 CC lib/ftl/utils/ftl_md.o 00:04:30.958 CC lib/ftl/utils/ftl_mempool.o 00:04:30.958 CC lib/ftl/utils/ftl_bitmap.o 00:04:30.958 CC lib/ftl/utils/ftl_property.o 00:04:30.958 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:31.217 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:31.217 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:31.217 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:31.217 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:31.217 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:31.217 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:31.217 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:31.217 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:31.476 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:31.476 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:31.476 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:31.476 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:31.476 CC lib/ftl/base/ftl_base_dev.o 00:04:31.476 LIB libspdk_nvmf.a 00:04:31.476 CC lib/ftl/base/ftl_base_bdev.o 00:04:31.476 CC lib/ftl/ftl_trace.o 00:04:31.476 LIB libspdk_vhost.a 00:04:31.734 SO libspdk_vhost.so.8.0 00:04:31.734 LIB libspdk_iscsi.a 00:04:31.734 SO libspdk_nvmf.so.19.0 00:04:31.734 SO libspdk_iscsi.so.8.0 00:04:31.734 SYMLINK libspdk_vhost.so 00:04:31.734 LIB libspdk_ftl.a 00:04:31.993 SYMLINK libspdk_nvmf.so 00:04:31.993 SYMLINK libspdk_iscsi.so 00:04:31.993 SO libspdk_ftl.so.9.0 00:04:32.252 SYMLINK libspdk_ftl.so 00:04:32.820 CC module/env_dpdk/env_dpdk_rpc.o 00:04:32.820 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:32.820 CC module/accel/dsa/accel_dsa.o 00:04:32.820 CC module/blob/bdev/blob_bdev.o 00:04:32.820 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:32.820 CC module/accel/ioat/accel_ioat.o 00:04:32.820 CC module/sock/posix/posix.o 00:04:32.820 CC module/fsdev/aio/fsdev_aio.o 00:04:32.820 CC module/keyring/file/keyring.o 00:04:32.820 CC module/accel/error/accel_error.o 00:04:32.820 LIB libspdk_env_dpdk_rpc.a 00:04:32.820 SO libspdk_env_dpdk_rpc.so.6.0 00:04:32.820 SYMLINK libspdk_env_dpdk_rpc.so 00:04:32.820 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:32.820 LIB libspdk_scheduler_dpdk_governor.a 00:04:32.820 CC module/keyring/file/keyring_rpc.o 00:04:32.820 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:32.820 CC module/accel/error/accel_error_rpc.o 00:04:32.820 CC module/accel/ioat/accel_ioat_rpc.o 00:04:33.079 LIB libspdk_scheduler_dynamic.a 00:04:33.079 SO libspdk_scheduler_dynamic.so.4.0 00:04:33.079 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.079 CC module/fsdev/aio/linux_aio_mgr.o 
00:04:33.079 CC module/accel/dsa/accel_dsa_rpc.o 00:04:33.079 LIB libspdk_keyring_file.a 00:04:33.079 SYMLINK libspdk_scheduler_dynamic.so 00:04:33.079 LIB libspdk_blob_bdev.a 00:04:33.079 LIB libspdk_accel_ioat.a 00:04:33.079 SO libspdk_blob_bdev.so.11.0 00:04:33.079 SO libspdk_keyring_file.so.2.0 00:04:33.079 LIB libspdk_accel_error.a 00:04:33.079 SO libspdk_accel_ioat.so.6.0 00:04:33.079 SO libspdk_accel_error.so.2.0 00:04:33.079 SYMLINK libspdk_keyring_file.so 00:04:33.079 SYMLINK libspdk_blob_bdev.so 00:04:33.079 LIB libspdk_accel_dsa.a 00:04:33.079 SYMLINK libspdk_accel_error.so 00:04:33.079 SYMLINK libspdk_accel_ioat.so 00:04:33.079 SO libspdk_accel_dsa.so.5.0 00:04:33.337 CC module/keyring/linux/keyring.o 00:04:33.337 CC module/keyring/linux/keyring_rpc.o 00:04:33.337 CC module/scheduler/gscheduler/gscheduler.o 00:04:33.337 SYMLINK libspdk_accel_dsa.so 00:04:33.337 CC module/accel/iaa/accel_iaa.o 00:04:33.337 CC module/accel/iaa/accel_iaa_rpc.o 00:04:33.337 LIB libspdk_keyring_linux.a 00:04:33.337 CC module/bdev/delay/vbdev_delay.o 00:04:33.337 LIB libspdk_scheduler_gscheduler.a 00:04:33.337 CC module/bdev/error/vbdev_error.o 00:04:33.337 SO libspdk_keyring_linux.so.1.0 00:04:33.337 CC module/blobfs/bdev/blobfs_bdev.o 00:04:33.337 CC module/bdev/gpt/gpt.o 00:04:33.337 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.337 LIB libspdk_fsdev_aio.a 00:04:33.596 SYMLINK libspdk_keyring_linux.so 00:04:33.596 CC module/bdev/gpt/vbdev_gpt.o 00:04:33.596 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.596 SO libspdk_fsdev_aio.so.1.0 00:04:33.596 LIB libspdk_sock_posix.a 00:04:33.596 LIB libspdk_accel_iaa.a 00:04:33.596 SO libspdk_sock_posix.so.6.0 00:04:33.596 SYMLINK libspdk_fsdev_aio.so 00:04:33.596 SO libspdk_accel_iaa.so.3.0 00:04:33.596 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:33.596 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.596 SYMLINK libspdk_accel_iaa.so 00:04:33.596 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.596 CC 
module/bdev/lvol/vbdev_lvol.o 00:04:33.596 SYMLINK libspdk_sock_posix.so 00:04:33.596 CC module/bdev/malloc/bdev_malloc.o 00:04:33.596 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:33.596 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.596 CC module/bdev/null/bdev_null.o 00:04:33.854 LIB libspdk_bdev_gpt.a 00:04:33.854 LIB libspdk_blobfs_bdev.a 00:04:33.854 CC module/bdev/null/bdev_null_rpc.o 00:04:33.854 SO libspdk_bdev_gpt.so.6.0 00:04:33.854 LIB libspdk_bdev_delay.a 00:04:33.854 SO libspdk_blobfs_bdev.so.6.0 00:04:33.854 LIB libspdk_bdev_error.a 00:04:33.854 SO libspdk_bdev_delay.so.6.0 00:04:33.854 SO libspdk_bdev_error.so.6.0 00:04:33.854 SYMLINK libspdk_bdev_gpt.so 00:04:33.854 SYMLINK libspdk_blobfs_bdev.so 00:04:33.854 SYMLINK libspdk_bdev_delay.so 00:04:33.854 SYMLINK libspdk_bdev_error.so 00:04:34.113 LIB libspdk_bdev_null.a 00:04:34.113 CC module/bdev/nvme/bdev_nvme.o 00:04:34.113 CC module/bdev/raid/bdev_raid.o 00:04:34.113 CC module/bdev/passthru/vbdev_passthru.o 00:04:34.113 SO libspdk_bdev_null.so.6.0 00:04:34.113 CC module/bdev/split/vbdev_split.o 00:04:34.113 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.113 LIB libspdk_bdev_malloc.a 00:04:34.113 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.113 CC module/bdev/aio/bdev_aio.o 00:04:34.113 SO libspdk_bdev_malloc.so.6.0 00:04:34.113 SYMLINK libspdk_bdev_null.so 00:04:34.113 CC module/bdev/aio/bdev_aio_rpc.o 00:04:34.113 LIB libspdk_bdev_lvol.a 00:04:34.113 SYMLINK libspdk_bdev_malloc.so 00:04:34.113 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:34.113 SO libspdk_bdev_lvol.so.6.0 00:04:34.113 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:34.113 SYMLINK libspdk_bdev_lvol.so 00:04:34.372 LIB libspdk_bdev_split.a 00:04:34.372 SO libspdk_bdev_split.so.6.0 00:04:34.372 LIB libspdk_bdev_passthru.a 00:04:34.372 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:34.372 SO libspdk_bdev_passthru.so.6.0 00:04:34.372 SYMLINK libspdk_bdev_split.so 00:04:34.372 LIB libspdk_bdev_zone_block.a 00:04:34.372 
CC module/bdev/nvme/nvme_rpc.o 00:04:34.372 CC module/bdev/ftl/bdev_ftl.o 00:04:34.372 CC module/bdev/iscsi/bdev_iscsi.o 00:04:34.372 LIB libspdk_bdev_aio.a 00:04:34.372 SYMLINK libspdk_bdev_passthru.so 00:04:34.372 SO libspdk_bdev_zone_block.so.6.0 00:04:34.372 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:34.372 SO libspdk_bdev_aio.so.6.0 00:04:34.630 SYMLINK libspdk_bdev_zone_block.so 00:04:34.630 SYMLINK libspdk_bdev_aio.so 00:04:34.630 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:34.630 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.630 CC module/bdev/nvme/bdev_mdns_client.o 00:04:34.630 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.630 CC module/bdev/nvme/vbdev_opal.o 00:04:34.630 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:34.630 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:34.888 LIB libspdk_bdev_ftl.a 00:04:34.888 SO libspdk_bdev_ftl.so.6.0 00:04:34.888 LIB libspdk_bdev_iscsi.a 00:04:34.888 SO libspdk_bdev_iscsi.so.6.0 00:04:34.888 SYMLINK libspdk_bdev_ftl.so 00:04:34.888 CC module/bdev/raid/bdev_raid_rpc.o 00:04:34.888 SYMLINK libspdk_bdev_iscsi.so 00:04:34.888 CC module/bdev/raid/bdev_raid_sb.o 00:04:34.888 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:34.888 CC module/bdev/raid/raid0.o 00:04:34.888 CC module/bdev/raid/raid1.o 00:04:34.888 CC module/bdev/raid/concat.o 00:04:35.148 CC module/bdev/raid/raid5f.o 00:04:35.148 LIB libspdk_bdev_virtio.a 00:04:35.148 SO libspdk_bdev_virtio.so.6.0 00:04:35.148 SYMLINK libspdk_bdev_virtio.so 00:04:35.407 LIB libspdk_bdev_raid.a 00:04:35.667 SO libspdk_bdev_raid.so.6.0 00:04:35.667 SYMLINK libspdk_bdev_raid.so 00:04:36.605 LIB libspdk_bdev_nvme.a 00:04:36.605 SO libspdk_bdev_nvme.so.7.0 00:04:36.605 SYMLINK libspdk_bdev_nvme.so 00:04:37.173 CC module/event/subsystems/keyring/keyring.o 00:04:37.173 CC module/event/subsystems/vmd/vmd.o 00:04:37.173 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:37.173 CC module/event/subsystems/sock/sock.o 00:04:37.173 CC module/event/subsystems/scheduler/scheduler.o 00:04:37.173 CC 
module/event/subsystems/fsdev/fsdev.o 00:04:37.173 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:37.173 CC module/event/subsystems/iobuf/iobuf.o 00:04:37.173 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:37.433 LIB libspdk_event_vhost_blk.a 00:04:37.433 LIB libspdk_event_scheduler.a 00:04:37.433 LIB libspdk_event_sock.a 00:04:37.433 LIB libspdk_event_keyring.a 00:04:37.433 LIB libspdk_event_fsdev.a 00:04:37.433 SO libspdk_event_sock.so.5.0 00:04:37.433 SO libspdk_event_vhost_blk.so.3.0 00:04:37.433 SO libspdk_event_scheduler.so.4.0 00:04:37.433 LIB libspdk_event_iobuf.a 00:04:37.433 SO libspdk_event_keyring.so.1.0 00:04:37.433 SO libspdk_event_fsdev.so.1.0 00:04:37.433 LIB libspdk_event_vmd.a 00:04:37.433 SO libspdk_event_iobuf.so.3.0 00:04:37.433 SYMLINK libspdk_event_sock.so 00:04:37.433 SO libspdk_event_vmd.so.6.0 00:04:37.433 SYMLINK libspdk_event_vhost_blk.so 00:04:37.434 SYMLINK libspdk_event_scheduler.so 00:04:37.434 SYMLINK libspdk_event_keyring.so 00:04:37.434 SYMLINK libspdk_event_fsdev.so 00:04:37.434 SYMLINK libspdk_event_iobuf.so 00:04:37.434 SYMLINK libspdk_event_vmd.so 00:04:38.003 CC module/event/subsystems/accel/accel.o 00:04:38.003 LIB libspdk_event_accel.a 00:04:38.003 SO libspdk_event_accel.so.6.0 00:04:38.262 SYMLINK libspdk_event_accel.so 00:04:38.522 CC module/event/subsystems/bdev/bdev.o 00:04:38.780 LIB libspdk_event_bdev.a 00:04:38.780 SO libspdk_event_bdev.so.6.0 00:04:38.780 SYMLINK libspdk_event_bdev.so 00:04:39.348 CC module/event/subsystems/scsi/scsi.o 00:04:39.348 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:39.348 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:39.348 CC module/event/subsystems/nbd/nbd.o 00:04:39.348 CC module/event/subsystems/ublk/ublk.o 00:04:39.348 LIB libspdk_event_scsi.a 00:04:39.348 LIB libspdk_event_ublk.a 00:04:39.348 LIB libspdk_event_nbd.a 00:04:39.348 SO libspdk_event_scsi.so.6.0 00:04:39.348 SO libspdk_event_nbd.so.6.0 00:04:39.348 SO libspdk_event_ublk.so.3.0 00:04:39.348 LIB 
libspdk_event_nvmf.a 00:04:39.348 SYMLINK libspdk_event_nbd.so 00:04:39.348 SYMLINK libspdk_event_scsi.so 00:04:39.348 SYMLINK libspdk_event_ublk.so 00:04:39.348 SO libspdk_event_nvmf.so.6.0 00:04:39.607 SYMLINK libspdk_event_nvmf.so 00:04:39.865 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:39.865 CC module/event/subsystems/iscsi/iscsi.o 00:04:39.865 LIB libspdk_event_vhost_scsi.a 00:04:39.865 LIB libspdk_event_iscsi.a 00:04:40.124 SO libspdk_event_vhost_scsi.so.3.0 00:04:40.124 SO libspdk_event_iscsi.so.6.0 00:04:40.124 SYMLINK libspdk_event_vhost_scsi.so 00:04:40.124 SYMLINK libspdk_event_iscsi.so 00:04:40.384 SO libspdk.so.6.0 00:04:40.384 SYMLINK libspdk.so 00:04:40.643 CXX app/trace/trace.o 00:04:40.643 CC app/trace_record/trace_record.o 00:04:40.643 CC app/spdk_lspci/spdk_lspci.o 00:04:40.643 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:40.643 CC app/nvmf_tgt/nvmf_main.o 00:04:40.643 CC app/iscsi_tgt/iscsi_tgt.o 00:04:40.643 CC app/spdk_tgt/spdk_tgt.o 00:04:40.643 CC examples/util/zipf/zipf.o 00:04:40.643 CC examples/ioat/perf/perf.o 00:04:40.643 CC test/thread/poller_perf/poller_perf.o 00:04:40.643 LINK spdk_lspci 00:04:40.901 LINK interrupt_tgt 00:04:40.901 LINK nvmf_tgt 00:04:40.901 LINK iscsi_tgt 00:04:40.901 LINK poller_perf 00:04:40.901 LINK zipf 00:04:40.901 LINK spdk_trace_record 00:04:40.901 LINK spdk_tgt 00:04:40.901 LINK ioat_perf 00:04:40.901 CC app/spdk_nvme_perf/perf.o 00:04:40.901 LINK spdk_trace 00:04:41.159 CC app/spdk_nvme_identify/identify.o 00:04:41.159 TEST_HEADER include/spdk/accel.h 00:04:41.159 TEST_HEADER include/spdk/accel_module.h 00:04:41.159 TEST_HEADER include/spdk/assert.h 00:04:41.159 CC app/spdk_nvme_discover/discovery_aer.o 00:04:41.159 TEST_HEADER include/spdk/barrier.h 00:04:41.159 TEST_HEADER include/spdk/base64.h 00:04:41.159 TEST_HEADER include/spdk/bdev.h 00:04:41.159 TEST_HEADER include/spdk/bdev_module.h 00:04:41.159 TEST_HEADER include/spdk/bdev_zone.h 00:04:41.159 TEST_HEADER include/spdk/bit_array.h 
00:04:41.159 TEST_HEADER include/spdk/bit_pool.h 00:04:41.159 CC examples/ioat/verify/verify.o 00:04:41.159 TEST_HEADER include/spdk/blob_bdev.h 00:04:41.159 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:41.159 TEST_HEADER include/spdk/blobfs.h 00:04:41.159 TEST_HEADER include/spdk/blob.h 00:04:41.159 TEST_HEADER include/spdk/conf.h 00:04:41.159 TEST_HEADER include/spdk/config.h 00:04:41.159 TEST_HEADER include/spdk/cpuset.h 00:04:41.159 TEST_HEADER include/spdk/crc16.h 00:04:41.159 TEST_HEADER include/spdk/crc32.h 00:04:41.159 TEST_HEADER include/spdk/crc64.h 00:04:41.159 TEST_HEADER include/spdk/dif.h 00:04:41.159 TEST_HEADER include/spdk/dma.h 00:04:41.159 TEST_HEADER include/spdk/endian.h 00:04:41.159 TEST_HEADER include/spdk/env_dpdk.h 00:04:41.159 TEST_HEADER include/spdk/env.h 00:04:41.159 TEST_HEADER include/spdk/event.h 00:04:41.159 TEST_HEADER include/spdk/fd_group.h 00:04:41.159 TEST_HEADER include/spdk/fd.h 00:04:41.159 TEST_HEADER include/spdk/file.h 00:04:41.159 TEST_HEADER include/spdk/fsdev.h 00:04:41.159 TEST_HEADER include/spdk/fsdev_module.h 00:04:41.159 TEST_HEADER include/spdk/ftl.h 00:04:41.159 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:41.159 TEST_HEADER include/spdk/gpt_spec.h 00:04:41.159 TEST_HEADER include/spdk/hexlify.h 00:04:41.159 TEST_HEADER include/spdk/histogram_data.h 00:04:41.159 TEST_HEADER include/spdk/idxd.h 00:04:41.159 TEST_HEADER include/spdk/idxd_spec.h 00:04:41.159 TEST_HEADER include/spdk/init.h 00:04:41.159 TEST_HEADER include/spdk/ioat.h 00:04:41.159 TEST_HEADER include/spdk/ioat_spec.h 00:04:41.159 TEST_HEADER include/spdk/iscsi_spec.h 00:04:41.159 TEST_HEADER include/spdk/json.h 00:04:41.159 TEST_HEADER include/spdk/jsonrpc.h 00:04:41.159 TEST_HEADER include/spdk/keyring.h 00:04:41.159 TEST_HEADER include/spdk/keyring_module.h 00:04:41.159 TEST_HEADER include/spdk/likely.h 00:04:41.159 TEST_HEADER include/spdk/log.h 00:04:41.159 TEST_HEADER include/spdk/lvol.h 00:04:41.159 CC test/dma/test_dma/test_dma.o 
00:04:41.159 CC test/app/bdev_svc/bdev_svc.o 00:04:41.159 TEST_HEADER include/spdk/md5.h 00:04:41.159 TEST_HEADER include/spdk/memory.h 00:04:41.159 TEST_HEADER include/spdk/mmio.h 00:04:41.159 TEST_HEADER include/spdk/nbd.h 00:04:41.159 TEST_HEADER include/spdk/net.h 00:04:41.159 TEST_HEADER include/spdk/notify.h 00:04:41.159 TEST_HEADER include/spdk/nvme.h 00:04:41.159 TEST_HEADER include/spdk/nvme_intel.h 00:04:41.159 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:41.159 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:41.159 TEST_HEADER include/spdk/nvme_spec.h 00:04:41.159 TEST_HEADER include/spdk/nvme_zns.h 00:04:41.159 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:41.159 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:41.159 TEST_HEADER include/spdk/nvmf.h 00:04:41.159 TEST_HEADER include/spdk/nvmf_spec.h 00:04:41.159 TEST_HEADER include/spdk/nvmf_transport.h 00:04:41.159 TEST_HEADER include/spdk/opal.h 00:04:41.159 TEST_HEADER include/spdk/opal_spec.h 00:04:41.159 TEST_HEADER include/spdk/pci_ids.h 00:04:41.159 TEST_HEADER include/spdk/pipe.h 00:04:41.159 CC test/env/mem_callbacks/mem_callbacks.o 00:04:41.159 TEST_HEADER include/spdk/queue.h 00:04:41.159 TEST_HEADER include/spdk/reduce.h 00:04:41.159 TEST_HEADER include/spdk/rpc.h 00:04:41.159 CC test/event/event_perf/event_perf.o 00:04:41.159 TEST_HEADER include/spdk/scheduler.h 00:04:41.159 LINK spdk_nvme_discover 00:04:41.159 TEST_HEADER include/spdk/scsi.h 00:04:41.159 TEST_HEADER include/spdk/scsi_spec.h 00:04:41.418 TEST_HEADER include/spdk/sock.h 00:04:41.418 TEST_HEADER include/spdk/stdinc.h 00:04:41.418 TEST_HEADER include/spdk/string.h 00:04:41.418 TEST_HEADER include/spdk/thread.h 00:04:41.418 TEST_HEADER include/spdk/trace.h 00:04:41.418 TEST_HEADER include/spdk/trace_parser.h 00:04:41.418 TEST_HEADER include/spdk/tree.h 00:04:41.418 TEST_HEADER include/spdk/ublk.h 00:04:41.418 TEST_HEADER include/spdk/util.h 00:04:41.418 TEST_HEADER include/spdk/uuid.h 00:04:41.418 TEST_HEADER include/spdk/version.h 
00:04:41.418 LINK verify 00:04:41.418 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:41.418 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:41.418 TEST_HEADER include/spdk/vhost.h 00:04:41.418 TEST_HEADER include/spdk/vmd.h 00:04:41.418 TEST_HEADER include/spdk/xor.h 00:04:41.418 TEST_HEADER include/spdk/zipf.h 00:04:41.418 CXX test/cpp_headers/accel.o 00:04:41.418 CC examples/thread/thread/thread_ex.o 00:04:41.418 LINK bdev_svc 00:04:41.418 LINK event_perf 00:04:41.418 CXX test/cpp_headers/accel_module.o 00:04:41.418 CC test/env/vtophys/vtophys.o 00:04:41.418 CC test/rpc_client/rpc_client_test.o 00:04:41.675 LINK thread 00:04:41.675 CXX test/cpp_headers/assert.o 00:04:41.675 LINK vtophys 00:04:41.675 CC test/event/reactor/reactor.o 00:04:41.675 LINK rpc_client_test 00:04:41.675 LINK test_dma 00:04:41.675 LINK spdk_nvme_perf 00:04:41.675 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:41.675 CXX test/cpp_headers/barrier.o 00:04:41.933 LINK mem_callbacks 00:04:41.933 LINK reactor 00:04:41.933 CC examples/sock/hello_world/hello_sock.o 00:04:41.933 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:41.933 CC examples/vmd/lsvmd/lsvmd.o 00:04:41.933 CXX test/cpp_headers/base64.o 00:04:41.933 CC test/event/reactor_perf/reactor_perf.o 00:04:41.933 LINK spdk_nvme_identify 00:04:41.933 CC test/app/histogram_perf/histogram_perf.o 00:04:41.933 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:41.933 CC app/spdk_top/spdk_top.o 00:04:42.191 LINK lsvmd 00:04:42.191 CXX test/cpp_headers/bdev.o 00:04:42.192 LINK reactor_perf 00:04:42.192 CXX test/cpp_headers/bdev_module.o 00:04:42.192 LINK histogram_perf 00:04:42.192 LINK hello_sock 00:04:42.192 LINK env_dpdk_post_init 00:04:42.192 LINK nvme_fuzz 00:04:42.192 CXX test/cpp_headers/bdev_zone.o 00:04:42.192 CC examples/vmd/led/led.o 00:04:42.450 CC test/event/app_repeat/app_repeat.o 00:04:42.450 CC test/app/jsoncat/jsoncat.o 00:04:42.450 CC test/app/stub/stub.o 00:04:42.450 CC test/env/memory/memory_ut.o 00:04:42.450 CXX 
test/cpp_headers/bit_array.o 00:04:42.450 CC test/accel/dif/dif.o 00:04:42.450 LINK led 00:04:42.450 LINK app_repeat 00:04:42.450 LINK jsoncat 00:04:42.450 CC test/blobfs/mkfs/mkfs.o 00:04:42.450 LINK stub 00:04:42.450 CXX test/cpp_headers/bit_pool.o 00:04:42.708 LINK mkfs 00:04:42.708 CXX test/cpp_headers/blob_bdev.o 00:04:42.708 CC test/event/scheduler/scheduler.o 00:04:42.708 CC examples/idxd/perf/perf.o 00:04:42.708 CC test/nvme/aer/aer.o 00:04:42.967 CC test/lvol/esnap/esnap.o 00:04:42.967 CXX test/cpp_headers/blobfs_bdev.o 00:04:42.967 LINK spdk_top 00:04:42.967 LINK scheduler 00:04:42.967 CXX test/cpp_headers/blobfs.o 00:04:42.967 CC app/vhost/vhost.o 00:04:43.225 LINK aer 00:04:43.226 LINK dif 00:04:43.226 LINK idxd_perf 00:04:43.226 CXX test/cpp_headers/blob.o 00:04:43.226 LINK vhost 00:04:43.226 CC test/nvme/reset/reset.o 00:04:43.226 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:43.226 CXX test/cpp_headers/conf.o 00:04:43.226 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:43.484 CC app/spdk_dd/spdk_dd.o 00:04:43.484 LINK reset 00:04:43.484 CXX test/cpp_headers/config.o 00:04:43.484 LINK memory_ut 00:04:43.484 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:43.484 CXX test/cpp_headers/cpuset.o 00:04:43.484 CC app/fio/nvme/fio_plugin.o 00:04:43.484 CC test/bdev/bdevio/bdevio.o 00:04:43.484 LINK hello_fsdev 00:04:43.748 CXX test/cpp_headers/crc16.o 00:04:43.748 CC test/nvme/sgl/sgl.o 00:04:43.748 LINK iscsi_fuzz 00:04:43.748 CC test/env/pci/pci_ut.o 00:04:43.748 LINK spdk_dd 00:04:43.748 CXX test/cpp_headers/crc32.o 00:04:43.748 CC examples/accel/perf/accel_perf.o 00:04:43.748 LINK vhost_fuzz 00:04:44.010 CXX test/cpp_headers/crc64.o 00:04:44.010 LINK sgl 00:04:44.010 LINK bdevio 00:04:44.010 CXX test/cpp_headers/dif.o 00:04:44.010 CC app/fio/bdev/fio_plugin.o 00:04:44.010 CXX test/cpp_headers/dma.o 00:04:44.010 CC test/nvme/e2edp/nvme_dp.o 00:04:44.010 LINK spdk_nvme 00:04:44.010 LINK pci_ut 00:04:44.268 CXX test/cpp_headers/endian.o 00:04:44.268 CC 
examples/blob/hello_world/hello_blob.o 00:04:44.268 CC examples/blob/cli/blobcli.o 00:04:44.268 CC test/nvme/overhead/overhead.o 00:04:44.268 CC examples/nvme/hello_world/hello_world.o 00:04:44.268 CXX test/cpp_headers/env_dpdk.o 00:04:44.268 LINK accel_perf 00:04:44.268 CXX test/cpp_headers/env.o 00:04:44.268 LINK nvme_dp 00:04:44.526 LINK hello_blob 00:04:44.526 LINK spdk_bdev 00:04:44.526 LINK hello_world 00:04:44.526 LINK overhead 00:04:44.526 CXX test/cpp_headers/event.o 00:04:44.526 CC examples/nvme/reconnect/reconnect.o 00:04:44.526 CXX test/cpp_headers/fd_group.o 00:04:44.526 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:44.526 CC test/nvme/err_injection/err_injection.o 00:04:44.785 CC examples/nvme/arbitration/arbitration.o 00:04:44.785 LINK blobcli 00:04:44.785 CXX test/cpp_headers/fd.o 00:04:44.785 CC examples/nvme/hotplug/hotplug.o 00:04:44.785 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:44.785 CC examples/nvme/abort/abort.o 00:04:44.785 LINK err_injection 00:04:44.785 CXX test/cpp_headers/file.o 00:04:44.785 LINK reconnect 00:04:44.785 LINK cmb_copy 00:04:45.044 LINK hotplug 00:04:45.044 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:45.044 CXX test/cpp_headers/fsdev.o 00:04:45.044 LINK arbitration 00:04:45.044 CC test/nvme/startup/startup.o 00:04:45.044 CC test/nvme/reserve/reserve.o 00:04:45.044 CXX test/cpp_headers/fsdev_module.o 00:04:45.044 LINK abort 00:04:45.044 LINK pmr_persistence 00:04:45.044 LINK nvme_manage 00:04:45.044 CC test/nvme/simple_copy/simple_copy.o 00:04:45.303 CC examples/bdev/hello_world/hello_bdev.o 00:04:45.303 LINK startup 00:04:45.303 CC examples/bdev/bdevperf/bdevperf.o 00:04:45.303 CXX test/cpp_headers/ftl.o 00:04:45.303 LINK reserve 00:04:45.303 CC test/nvme/connect_stress/connect_stress.o 00:04:45.303 CXX test/cpp_headers/fuse_dispatcher.o 00:04:45.303 CC test/nvme/boot_partition/boot_partition.o 00:04:45.303 LINK simple_copy 00:04:45.303 LINK hello_bdev 00:04:45.303 CC test/nvme/compliance/nvme_compliance.o 
00:04:45.303 CXX test/cpp_headers/gpt_spec.o 00:04:45.562 CXX test/cpp_headers/hexlify.o 00:04:45.562 LINK connect_stress 00:04:45.562 LINK boot_partition 00:04:45.562 CC test/nvme/fused_ordering/fused_ordering.o 00:04:45.562 CXX test/cpp_headers/histogram_data.o 00:04:45.562 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:45.562 CC test/nvme/fdp/fdp.o 00:04:45.562 CXX test/cpp_headers/idxd.o 00:04:45.562 CXX test/cpp_headers/idxd_spec.o 00:04:45.821 CXX test/cpp_headers/init.o 00:04:45.821 CC test/nvme/cuse/cuse.o 00:04:45.821 LINK fused_ordering 00:04:45.821 LINK nvme_compliance 00:04:45.821 CXX test/cpp_headers/ioat.o 00:04:45.821 CXX test/cpp_headers/ioat_spec.o 00:04:45.821 LINK doorbell_aers 00:04:45.821 CXX test/cpp_headers/iscsi_spec.o 00:04:45.821 CXX test/cpp_headers/json.o 00:04:45.821 CXX test/cpp_headers/jsonrpc.o 00:04:45.821 CXX test/cpp_headers/keyring.o 00:04:45.821 CXX test/cpp_headers/keyring_module.o 00:04:45.821 CXX test/cpp_headers/likely.o 00:04:45.821 LINK fdp 00:04:46.079 CXX test/cpp_headers/log.o 00:04:46.079 CXX test/cpp_headers/lvol.o 00:04:46.079 CXX test/cpp_headers/md5.o 00:04:46.079 CXX test/cpp_headers/memory.o 00:04:46.079 CXX test/cpp_headers/mmio.o 00:04:46.079 CXX test/cpp_headers/nbd.o 00:04:46.079 LINK bdevperf 00:04:46.079 CXX test/cpp_headers/net.o 00:04:46.079 CXX test/cpp_headers/notify.o 00:04:46.079 CXX test/cpp_headers/nvme.o 00:04:46.079 CXX test/cpp_headers/nvme_intel.o 00:04:46.079 CXX test/cpp_headers/nvme_ocssd.o 00:04:46.079 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:46.079 CXX test/cpp_headers/nvme_spec.o 00:04:46.079 CXX test/cpp_headers/nvme_zns.o 00:04:46.337 CXX test/cpp_headers/nvmf_cmd.o 00:04:46.337 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:46.337 CXX test/cpp_headers/nvmf.o 00:04:46.337 CXX test/cpp_headers/nvmf_spec.o 00:04:46.337 CXX test/cpp_headers/nvmf_transport.o 00:04:46.337 CXX test/cpp_headers/opal.o 00:04:46.337 CXX test/cpp_headers/opal_spec.o 00:04:46.337 CXX test/cpp_headers/pci_ids.o 
00:04:46.337 CC examples/nvmf/nvmf/nvmf.o 00:04:46.337 CXX test/cpp_headers/pipe.o 00:04:46.337 CXX test/cpp_headers/queue.o 00:04:46.337 CXX test/cpp_headers/reduce.o 00:04:46.596 CXX test/cpp_headers/rpc.o 00:04:46.596 CXX test/cpp_headers/scheduler.o 00:04:46.596 CXX test/cpp_headers/scsi.o 00:04:46.596 CXX test/cpp_headers/scsi_spec.o 00:04:46.596 CXX test/cpp_headers/sock.o 00:04:46.596 CXX test/cpp_headers/stdinc.o 00:04:46.596 CXX test/cpp_headers/string.o 00:04:46.596 CXX test/cpp_headers/thread.o 00:04:46.596 CXX test/cpp_headers/trace.o 00:04:46.596 CXX test/cpp_headers/trace_parser.o 00:04:46.596 CXX test/cpp_headers/tree.o 00:04:46.596 CXX test/cpp_headers/ublk.o 00:04:46.596 LINK nvmf 00:04:46.596 CXX test/cpp_headers/util.o 00:04:46.854 CXX test/cpp_headers/uuid.o 00:04:46.854 CXX test/cpp_headers/version.o 00:04:46.854 CXX test/cpp_headers/vfio_user_pci.o 00:04:46.854 CXX test/cpp_headers/vfio_user_spec.o 00:04:46.854 CXX test/cpp_headers/vhost.o 00:04:46.854 CXX test/cpp_headers/vmd.o 00:04:46.854 CXX test/cpp_headers/xor.o 00:04:46.854 CXX test/cpp_headers/zipf.o 00:04:46.854 LINK cuse 00:04:48.235 LINK esnap 00:04:48.804 00:04:48.804 real 1m14.090s 00:04:48.804 user 5m43.278s 00:04:48.804 sys 1m12.762s 00:04:48.804 23:22:28 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:48.804 23:22:28 make -- common/autotest_common.sh@10 -- $ set +x 00:04:48.804 ************************************ 00:04:48.804 END TEST make 00:04:48.804 ************************************ 00:04:48.804 23:22:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:48.804 23:22:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:48.804 23:22:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:48.804 23:22:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.804 23:22:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:48.804 23:22:28 -- pm/common@44 -- $ pid=6194 
00:04:48.804 23:22:28 -- pm/common@50 -- $ kill -TERM 6194 00:04:48.804 23:22:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:48.804 23:22:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:48.804 23:22:28 -- pm/common@44 -- $ pid=6196 00:04:48.804 23:22:28 -- pm/common@50 -- $ kill -TERM 6196 00:04:48.804 23:22:28 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:48.804 23:22:28 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:48.804 23:22:28 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:49.064 23:22:28 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:49.064 23:22:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.064 23:22:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.064 23:22:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.064 23:22:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.064 23:22:28 -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.064 23:22:28 -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.064 23:22:28 -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.064 23:22:28 -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.064 23:22:28 -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.064 23:22:28 -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.064 23:22:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.064 23:22:28 -- scripts/common.sh@344 -- # case "$op" in 00:04:49.064 23:22:28 -- scripts/common.sh@345 -- # : 1 00:04:49.064 23:22:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.064 23:22:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.064 23:22:28 -- scripts/common.sh@365 -- # decimal 1 00:04:49.064 23:22:28 -- scripts/common.sh@353 -- # local d=1 00:04:49.064 23:22:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.064 23:22:28 -- scripts/common.sh@355 -- # echo 1 00:04:49.064 23:22:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.064 23:22:28 -- scripts/common.sh@366 -- # decimal 2 00:04:49.064 23:22:28 -- scripts/common.sh@353 -- # local d=2 00:04:49.064 23:22:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.064 23:22:28 -- scripts/common.sh@355 -- # echo 2 00:04:49.064 23:22:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.064 23:22:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.064 23:22:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.064 23:22:28 -- scripts/common.sh@368 -- # return 0 00:04:49.064 23:22:28 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.064 23:22:28 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.064 --rc genhtml_branch_coverage=1 00:04:49.064 --rc genhtml_function_coverage=1 00:04:49.064 --rc genhtml_legend=1 00:04:49.064 --rc geninfo_all_blocks=1 00:04:49.064 --rc geninfo_unexecuted_blocks=1 00:04:49.064 00:04:49.064 ' 00:04:49.064 23:22:28 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.064 --rc genhtml_branch_coverage=1 00:04:49.064 --rc genhtml_function_coverage=1 00:04:49.064 --rc genhtml_legend=1 00:04:49.064 --rc geninfo_all_blocks=1 00:04:49.064 --rc geninfo_unexecuted_blocks=1 00:04:49.064 00:04:49.064 ' 00:04:49.064 23:22:28 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.064 --rc genhtml_branch_coverage=1 00:04:49.064 --rc 
genhtml_function_coverage=1 00:04:49.064 --rc genhtml_legend=1 00:04:49.064 --rc geninfo_all_blocks=1 00:04:49.064 --rc geninfo_unexecuted_blocks=1 00:04:49.064 00:04:49.064 ' 00:04:49.064 23:22:28 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:49.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.064 --rc genhtml_branch_coverage=1 00:04:49.064 --rc genhtml_function_coverage=1 00:04:49.064 --rc genhtml_legend=1 00:04:49.064 --rc geninfo_all_blocks=1 00:04:49.064 --rc geninfo_unexecuted_blocks=1 00:04:49.064 00:04:49.064 ' 00:04:49.064 23:22:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.064 23:22:28 -- nvmf/common.sh@7 -- # uname -s 00:04:49.064 23:22:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.064 23:22:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.064 23:22:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.064 23:22:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.064 23:22:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.064 23:22:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.064 23:22:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.064 23:22:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.064 23:22:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.064 23:22:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.064 23:22:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:32f16825-d8ef-4474-a5f6-58ecfae20c36 00:04:49.064 23:22:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=32f16825-d8ef-4474-a5f6-58ecfae20c36 00:04:49.064 23:22:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.064 23:22:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.064 23:22:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.064 23:22:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:49.064 23:22:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.064 23:22:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.064 23:22:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.064 23:22:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.064 23:22:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.064 23:22:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.064 23:22:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.065 23:22:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.065 23:22:28 -- paths/export.sh@5 -- # export PATH 00:04:49.065 23:22:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.065 23:22:28 -- nvmf/common.sh@51 -- # : 0 00:04:49.065 23:22:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.065 23:22:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.065 23:22:28 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:49.065 23:22:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.065 23:22:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.065 23:22:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.065 23:22:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.065 23:22:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.065 23:22:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.065 23:22:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:49.065 23:22:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:49.065 23:22:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:49.065 23:22:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:49.065 23:22:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:49.065 23:22:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:49.065 23:22:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:49.065 23:22:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:49.065 23:22:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:49.065 23:22:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:49.065 23:22:28 -- spdk/autotest.sh@48 -- # udevadm_pid=66777 00:04:49.065 23:22:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:49.065 23:22:28 -- pm/common@17 -- # local monitor 00:04:49.065 23:22:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.065 23:22:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:49.065 23:22:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:49.065 23:22:28 -- pm/common@25 -- # sleep 1 00:04:49.065 23:22:28 -- pm/common@21 -- # date +%s 00:04:49.065 23:22:28 -- 
pm/common@21 -- # date +%s 00:04:49.065 23:22:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727738548 00:04:49.065 23:22:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727738548 00:04:49.065 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727738548_collect-cpu-load.pm.log 00:04:49.065 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727738548_collect-vmstat.pm.log 00:04:50.008 23:22:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:50.008 23:22:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:50.008 23:22:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.008 23:22:29 -- common/autotest_common.sh@10 -- # set +x 00:04:50.008 23:22:29 -- spdk/autotest.sh@59 -- # create_test_list 00:04:50.008 23:22:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:50.008 23:22:29 -- common/autotest_common.sh@10 -- # set +x 00:04:50.278 23:22:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:50.279 23:22:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:50.279 23:22:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:50.279 23:22:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:50.279 23:22:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:50.279 23:22:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:50.279 23:22:29 -- common/autotest_common.sh@1455 -- # uname 00:04:50.279 23:22:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:50.279 23:22:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:50.279 23:22:29 -- common/autotest_common.sh@1475 -- 
# uname 00:04:50.279 23:22:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:50.279 23:22:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:50.279 23:22:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:50.279 lcov: LCOV version 1.15 00:04:50.279 23:22:30 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:05.230 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:05.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:20.131 23:22:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:20.131 23:22:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.131 23:22:57 -- common/autotest_common.sh@10 -- # set +x 00:05:20.131 23:22:57 -- spdk/autotest.sh@78 -- # rm -f 00:05:20.131 23:22:57 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.131 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:20.131 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:20.131 23:22:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:20.131 23:22:58 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:20.131 23:22:58 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:20.131 23:22:58 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:20.131 
23:22:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:20.131 23:22:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:20.131 23:22:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:20.131 23:22:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:20.131 23:22:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:20.131 23:22:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:20.131 23:22:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:20.131 23:22:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:20.131 23:22:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:20.131 23:22:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:20.131 23:22:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:20.131 23:22:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:20.131 23:22:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:20.131 23:22:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:20.131 23:22:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:20.131 23:22:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:20.131 23:22:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:20.131 23:22:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:20.131 23:22:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:20.131 23:22:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:20.131 23:22:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:20.131 23:22:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:20.131 23:22:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:20.131 23:22:58 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:20.131 23:22:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:20.131 23:22:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:20.131 No valid GPT data, bailing 00:05:20.131 23:22:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:20.131 23:22:58 -- scripts/common.sh@394 -- # pt= 00:05:20.131 23:22:58 -- scripts/common.sh@395 -- # return 1 00:05:20.131 23:22:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:20.131 1+0 records in 00:05:20.131 1+0 records out 00:05:20.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00662876 s, 158 MB/s 00:05:20.131 23:22:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:20.131 23:22:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:20.131 23:22:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:20.131 23:22:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:20.131 23:22:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:20.131 No valid GPT data, bailing 00:05:20.131 23:22:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:20.131 23:22:58 -- scripts/common.sh@394 -- # pt= 00:05:20.131 23:22:58 -- scripts/common.sh@395 -- # return 1 00:05:20.131 23:22:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:20.131 1+0 records in 00:05:20.131 1+0 records out 00:05:20.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668022 s, 157 MB/s 00:05:20.131 23:22:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:20.131 23:22:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:20.131 23:22:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:20.131 23:22:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:20.131 23:22:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:20.131 No valid GPT data, bailing 00:05:20.131 23:22:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:20.131 23:22:58 -- scripts/common.sh@394 -- # pt= 00:05:20.131 23:22:58 -- scripts/common.sh@395 -- # return 1 00:05:20.131 23:22:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:20.131 1+0 records in 00:05:20.131 1+0 records out 00:05:20.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00455656 s, 230 MB/s 00:05:20.131 23:22:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:20.131 23:22:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:20.131 23:22:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:20.131 23:22:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:20.131 23:22:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:20.131 No valid GPT data, bailing 00:05:20.132 23:22:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:20.132 23:22:59 -- scripts/common.sh@394 -- # pt= 00:05:20.132 23:22:59 -- scripts/common.sh@395 -- # return 1 00:05:20.132 23:22:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:20.132 1+0 records in 00:05:20.132 1+0 records out 00:05:20.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063852 s, 164 MB/s 00:05:20.132 23:22:59 -- spdk/autotest.sh@105 -- # sync 00:05:20.132 23:22:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:20.132 23:22:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:20.132 23:22:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:22.664 23:23:02 -- spdk/autotest.sh@111 -- # uname -s 00:05:22.664 23:23:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:22.664 23:23:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:22.664 23:23:02 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:23.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.231 Hugepages 00:05:23.231 node hugesize free / total 00:05:23.231 node0 1048576kB 0 / 0 00:05:23.231 node0 2048kB 0 / 0 00:05:23.231 00:05:23.231 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:23.488 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:23.488 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:23.488 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:23.488 23:23:03 -- spdk/autotest.sh@117 -- # uname -s 00:05:23.488 23:23:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:23.488 23:23:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:23.488 23:23:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.682 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.682 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.682 23:23:04 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:25.643 23:23:05 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:25.643 23:23:05 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:25.643 23:23:05 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.643 23:23:05 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:25.643 23:23:05 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:25.643 23:23:05 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:25.643 23:23:05 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.643 23:23:05 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.643 23:23:05 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:25.643 23:23:05 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:25.643 23:23:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:25.643 23:23:05 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:26.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.212 Waiting for block devices as requested 00:05:26.471 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.471 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.471 23:23:06 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.471 23:23:06 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:26.471 23:23:06 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:26.471 23:23:06 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:26.471 23:23:06 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:26.471 23:23:06 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:26.471 23:23:06 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:26.471 23:23:06 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:26.471 23:23:06 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:26.471 23:23:06 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:26.471 23:23:06 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:26.471 23:23:06 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.471 23:23:06 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.471 23:23:06 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.471 23:23:06 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.471 23:23:06 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:26.471 23:23:06 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:26.471 23:23:06 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.471 23:23:06 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.471 23:23:06 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.471 23:23:06 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.471 23:23:06 -- common/autotest_common.sh@1541 -- # continue 00:05:26.471 23:23:06 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.471 23:23:06 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:26.471 23:23:06 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:26.471 23:23:06 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:26.471 23:23:06 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:26.471 23:23:06 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:26.471 23:23:06 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:26.471 23:23:06 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:26.471 23:23:06 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:26.471 23:23:06 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:26.471 23:23:06 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:26.471 23:23:06 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.471 23:23:06 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.729 23:23:06 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.729 23:23:06 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.729 23:23:06 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.729 23:23:06 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:26.729 23:23:06 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.729 23:23:06 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.729 23:23:06 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.729 23:23:06 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.729 23:23:06 -- common/autotest_common.sh@1541 -- # continue 00:05:26.729 23:23:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:26.729 23:23:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.729 23:23:06 -- common/autotest_common.sh@10 -- # set +x 00:05:26.729 23:23:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:26.729 23:23:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.729 23:23:06 -- common/autotest_common.sh@10 -- # set +x 00:05:26.729 23:23:06 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.683 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.683 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.683 23:23:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:27.683 23:23:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:27.683 23:23:07 -- common/autotest_common.sh@10 -- # set +x 00:05:27.683 23:23:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:27.683 23:23:07 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:27.683 23:23:07 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.683 23:23:07 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:27.683 23:23:07 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:27.683 23:23:07 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:27.683 23:23:07 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:27.961 23:23:07 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:27.961 
23:23:07 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:27.961 23:23:07 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:27.961 23:23:07 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.961 23:23:07 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:27.961 23:23:07 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:27.961 23:23:07 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:27.961 23:23:07 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:27.961 23:23:07 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.961 23:23:07 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:27.961 23:23:07 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.961 23:23:07 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.961 23:23:07 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.961 23:23:07 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:27.961 23:23:07 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.961 23:23:07 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.961 23:23:07 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:27.961 23:23:07 -- common/autotest_common.sh@1570 -- # return 0 00:05:27.961 23:23:07 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:27.961 23:23:07 -- common/autotest_common.sh@1578 -- # return 0 00:05:27.961 23:23:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:27.961 23:23:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:27.961 23:23:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.961 23:23:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.961 23:23:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:27.961 23:23:07 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.961 23:23:07 -- common/autotest_common.sh@10 -- # set +x 00:05:27.961 23:23:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:27.961 23:23:07 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.961 23:23:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.961 23:23:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.961 23:23:07 -- common/autotest_common.sh@10 -- # set +x 00:05:27.961 ************************************ 00:05:27.961 START TEST env 00:05:27.961 ************************************ 00:05:27.961 23:23:07 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.961 * Looking for test storage... 00:05:27.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:27.961 23:23:07 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.961 23:23:07 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.961 23:23:07 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.220 23:23:07 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.220 23:23:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.220 23:23:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.220 23:23:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.221 23:23:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.221 23:23:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.221 23:23:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.221 23:23:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.221 23:23:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.221 23:23:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.221 23:23:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.221 23:23:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.221 23:23:07 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:28.221 23:23:07 env -- scripts/common.sh@345 -- # : 1 00:05:28.221 23:23:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.221 23:23:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.221 23:23:07 env -- scripts/common.sh@365 -- # decimal 1 00:05:28.221 23:23:07 env -- scripts/common.sh@353 -- # local d=1 00:05:28.221 23:23:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.221 23:23:07 env -- scripts/common.sh@355 -- # echo 1 00:05:28.221 23:23:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.221 23:23:07 env -- scripts/common.sh@366 -- # decimal 2 00:05:28.221 23:23:07 env -- scripts/common.sh@353 -- # local d=2 00:05:28.221 23:23:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.221 23:23:07 env -- scripts/common.sh@355 -- # echo 2 00:05:28.221 23:23:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.221 23:23:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.221 23:23:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.221 23:23:07 env -- scripts/common.sh@368 -- # return 0 00:05:28.221 23:23:07 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.221 23:23:07 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.221 --rc genhtml_branch_coverage=1 00:05:28.221 --rc genhtml_function_coverage=1 00:05:28.221 --rc genhtml_legend=1 00:05:28.221 --rc geninfo_all_blocks=1 00:05:28.221 --rc geninfo_unexecuted_blocks=1 00:05:28.221 00:05:28.221 ' 00:05:28.221 23:23:07 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.221 --rc genhtml_branch_coverage=1 00:05:28.221 --rc genhtml_function_coverage=1 00:05:28.221 --rc genhtml_legend=1 00:05:28.221 --rc 
geninfo_all_blocks=1 00:05:28.221 --rc geninfo_unexecuted_blocks=1 00:05:28.221 00:05:28.221 ' 00:05:28.221 23:23:07 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.221 --rc genhtml_branch_coverage=1 00:05:28.221 --rc genhtml_function_coverage=1 00:05:28.221 --rc genhtml_legend=1 00:05:28.221 --rc geninfo_all_blocks=1 00:05:28.221 --rc geninfo_unexecuted_blocks=1 00:05:28.221 00:05:28.221 ' 00:05:28.221 23:23:07 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.221 --rc genhtml_branch_coverage=1 00:05:28.221 --rc genhtml_function_coverage=1 00:05:28.221 --rc genhtml_legend=1 00:05:28.221 --rc geninfo_all_blocks=1 00:05:28.221 --rc geninfo_unexecuted_blocks=1 00:05:28.221 00:05:28.221 ' 00:05:28.221 23:23:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:28.221 23:23:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.221 23:23:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.221 23:23:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.221 ************************************ 00:05:28.221 START TEST env_memory 00:05:28.221 ************************************ 00:05:28.221 23:23:07 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:28.221 00:05:28.221 00:05:28.221 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.221 http://cunit.sourceforge.net/ 00:05:28.221 00:05:28.221 00:05:28.221 Suite: memory 00:05:28.221 Test: alloc and free memory map ...[2024-09-30 23:23:07.986528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:28.221 passed 00:05:28.221 Test: mem map translation ...[2024-09-30 23:23:08.029950] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:28.221 [2024-09-30 23:23:08.030002] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:28.221 [2024-09-30 23:23:08.030066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:28.221 [2024-09-30 23:23:08.030087] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:28.480 passed 00:05:28.480 Test: mem map registration ...[2024-09-30 23:23:08.099697] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:28.480 [2024-09-30 23:23:08.099811] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:28.480 passed 00:05:28.480 Test: mem map adjacent registrations ...passed 00:05:28.480 00:05:28.480 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.480 suites 1 1 n/a 0 0 00:05:28.480 tests 4 4 4 0 0 00:05:28.480 asserts 152 152 152 0 n/a 00:05:28.480 00:05:28.480 Elapsed time = 0.243 seconds 00:05:28.480 00:05:28.480 real 0m0.305s 00:05:28.480 user 0m0.255s 00:05:28.480 sys 0m0.037s 00:05:28.480 23:23:08 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.480 23:23:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:28.480 ************************************ 00:05:28.480 END TEST env_memory 00:05:28.480 ************************************ 00:05:28.480 23:23:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.480 
23:23:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.480 23:23:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.480 23:23:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.480 ************************************ 00:05:28.480 START TEST env_vtophys 00:05:28.480 ************************************ 00:05:28.480 23:23:08 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.480 EAL: lib.eal log level changed from notice to debug 00:05:28.480 EAL: Detected lcore 0 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 1 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 2 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 3 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 4 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 5 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 6 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 7 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 8 as core 0 on socket 0 00:05:28.480 EAL: Detected lcore 9 as core 0 on socket 0 00:05:28.480 EAL: Maximum logical cores by configuration: 128 00:05:28.480 EAL: Detected CPU lcores: 10 00:05:28.480 EAL: Detected NUMA nodes: 1 00:05:28.480 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:28.481 EAL: Detected shared linkage of DPDK 00:05:28.481 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:28.481 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:28.481 EAL: Registered [vdev] bus. 
00:05:28.481 EAL: bus.vdev log level changed from disabled to notice 00:05:28.481 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:28.481 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:28.481 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:28.481 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:28.481 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:28.481 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:28.481 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:28.481 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:28.739 EAL: No shared files mode enabled, IPC will be disabled 00:05:28.740 EAL: No shared files mode enabled, IPC is disabled 00:05:28.740 EAL: Selected IOVA mode 'PA' 00:05:28.740 EAL: Probing VFIO support... 00:05:28.740 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.740 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:28.740 EAL: Ask a virtual area of 0x2e000 bytes 00:05:28.740 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:28.740 EAL: Setting up physically contiguous memory... 
00:05:28.740 EAL: Setting maximum number of open files to 524288 00:05:28.740 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:28.740 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:28.740 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.740 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:28.740 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.740 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.740 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:28.740 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:28.740 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.740 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:28.740 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.740 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.740 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:28.740 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:28.740 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.740 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:28.740 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.740 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.740 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:28.740 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:28.740 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.740 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:28.740 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.740 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.740 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:28.740 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:28.740 EAL: Hugepages will be freed exactly as allocated. 
00:05:28.740 EAL: No shared files mode enabled, IPC is disabled 00:05:28.740 EAL: No shared files mode enabled, IPC is disabled 00:05:28.740 EAL: TSC frequency is ~2290000 KHz 00:05:28.740 EAL: Main lcore 0 is ready (tid=7f0594905a40;cpuset=[0]) 00:05:28.740 EAL: Trying to obtain current memory policy. 00:05:28.740 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.740 EAL: Restoring previous memory policy: 0 00:05:28.740 EAL: request: mp_malloc_sync 00:05:28.740 EAL: No shared files mode enabled, IPC is disabled 00:05:28.740 EAL: Heap on socket 0 was expanded by 2MB 00:05:28.740 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.740 EAL: No shared files mode enabled, IPC is disabled 00:05:28.740 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:28.740 EAL: Mem event callback 'spdk:(nil)' registered 00:05:28.740 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:28.740 00:05:28.740 00:05:28.740 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.740 http://cunit.sourceforge.net/ 00:05:28.740 00:05:28.740 00:05:28.740 Suite: components_suite 00:05:28.999 Test: vtophys_malloc_test ...passed 00:05:28.999 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:28.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.999 EAL: Restoring previous memory policy: 4 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was expanded by 4MB 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was shrunk by 4MB 00:05:28.999 EAL: Trying to obtain current memory policy. 
00:05:28.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.999 EAL: Restoring previous memory policy: 4 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was expanded by 6MB 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was shrunk by 6MB 00:05:28.999 EAL: Trying to obtain current memory policy. 00:05:28.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.999 EAL: Restoring previous memory policy: 4 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was expanded by 10MB 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was shrunk by 10MB 00:05:28.999 EAL: Trying to obtain current memory policy. 00:05:28.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.999 EAL: Restoring previous memory policy: 4 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was expanded by 18MB 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was shrunk by 18MB 00:05:28.999 EAL: Trying to obtain current memory policy. 
00:05:28.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.999 EAL: Restoring previous memory policy: 4 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.999 EAL: request: mp_malloc_sync 00:05:28.999 EAL: No shared files mode enabled, IPC is disabled 00:05:28.999 EAL: Heap on socket 0 was expanded by 34MB 00:05:28.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.259 EAL: request: mp_malloc_sync 00:05:29.259 EAL: No shared files mode enabled, IPC is disabled 00:05:29.259 EAL: Heap on socket 0 was shrunk by 34MB 00:05:29.259 EAL: Trying to obtain current memory policy. 00:05:29.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.259 EAL: Restoring previous memory policy: 4 00:05:29.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.259 EAL: request: mp_malloc_sync 00:05:29.259 EAL: No shared files mode enabled, IPC is disabled 00:05:29.259 EAL: Heap on socket 0 was expanded by 66MB 00:05:29.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.259 EAL: request: mp_malloc_sync 00:05:29.259 EAL: No shared files mode enabled, IPC is disabled 00:05:29.259 EAL: Heap on socket 0 was shrunk by 66MB 00:05:29.259 EAL: Trying to obtain current memory policy. 00:05:29.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.259 EAL: Restoring previous memory policy: 4 00:05:29.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.259 EAL: request: mp_malloc_sync 00:05:29.259 EAL: No shared files mode enabled, IPC is disabled 00:05:29.259 EAL: Heap on socket 0 was expanded by 130MB 00:05:29.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.259 EAL: request: mp_malloc_sync 00:05:29.259 EAL: No shared files mode enabled, IPC is disabled 00:05:29.259 EAL: Heap on socket 0 was shrunk by 130MB 00:05:29.259 EAL: Trying to obtain current memory policy. 
00:05:29.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.259 EAL: Restoring previous memory policy: 4 00:05:29.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.259 EAL: request: mp_malloc_sync 00:05:29.259 EAL: No shared files mode enabled, IPC is disabled 00:05:29.259 EAL: Heap on socket 0 was expanded by 258MB 00:05:29.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.259 EAL: request: mp_malloc_sync 00:05:29.259 EAL: No shared files mode enabled, IPC is disabled 00:05:29.259 EAL: Heap on socket 0 was shrunk by 258MB 00:05:29.259 EAL: Trying to obtain current memory policy. 00:05:29.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.516 EAL: Restoring previous memory policy: 4 00:05:29.516 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.516 EAL: request: mp_malloc_sync 00:05:29.516 EAL: No shared files mode enabled, IPC is disabled 00:05:29.516 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.516 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.516 EAL: request: mp_malloc_sync 00:05:29.516 EAL: No shared files mode enabled, IPC is disabled 00:05:29.516 EAL: Heap on socket 0 was shrunk by 514MB 00:05:29.516 EAL: Trying to obtain current memory policy. 
00:05:29.516 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.774 EAL: Restoring previous memory policy: 4 00:05:29.774 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.774 EAL: request: mp_malloc_sync 00:05:29.774 EAL: No shared files mode enabled, IPC is disabled 00:05:29.774 EAL: Heap on socket 0 was expanded by 1026MB 00:05:30.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.033 EAL: request: mp_malloc_sync 00:05:30.033 EAL: No shared files mode enabled, IPC is disabled 00:05:30.033 passed 00:05:30.033 00:05:30.033 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.033 suites 1 1 n/a 0 0 00:05:30.033 tests 2 2 2 0 0 00:05:30.033 asserts 5358 5358 5358 0 n/a 00:05:30.033 00:05:30.033 Elapsed time = 1.355 seconds 00:05:30.033 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:30.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.033 EAL: request: mp_malloc_sync 00:05:30.033 EAL: No shared files mode enabled, IPC is disabled 00:05:30.033 EAL: Heap on socket 0 was shrunk by 2MB 00:05:30.033 EAL: No shared files mode enabled, IPC is disabled 00:05:30.033 EAL: No shared files mode enabled, IPC is disabled 00:05:30.033 EAL: No shared files mode enabled, IPC is disabled 00:05:30.291 00:05:30.291 real 0m1.614s 00:05:30.291 user 0m0.783s 00:05:30.291 sys 0m0.695s 00:05:30.291 ************************************ 00:05:30.291 END TEST env_vtophys 00:05:30.291 ************************************ 00:05:30.291 23:23:09 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.291 23:23:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:30.291 23:23:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:30.291 23:23:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.291 23:23:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.291 23:23:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.291 
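The `cmp_versions`/`lt 1.15 2` xtrace stepped through earlier in this log (the `scripts/common.sh@364`–`@368` lines) can be condensed into a small standalone sketch. This is a simplified re-creation, not the real helper: the function name `ver_lt` is hypothetical, and non-numeric components (e.g. `-rc1` suffixes) are treated as 0, which the real `scripts/common.sh` handles more carefully.

```shell
#!/usr/bin/env bash
# Hedged sketch of the version comparison traced above: split both version
# strings on IFS=.-: into arrays, then compare component by component,
# padding missing components with 0.
ver_lt() {
    local -a ver1 ver2
    local IFS=.-:                       # split on dots, dashes, colons, as in the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less: true
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: false
    done
    return 1                            # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.1 2.1 || echo "2.1 is not less than 2.1"
```

In the log this decides whether the installed `lcov` (reported as 1.15 by `lcov --version | awk '{print $NF}'`) predates 2.x, which selects the older `--rc lcov_branch_coverage=1` option spelling seen in the exported `LCOV_OPTS`.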
************************************ 00:05:30.291 START TEST env_pci 00:05:30.291 ************************************ 00:05:30.291 23:23:09 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:30.291 00:05:30.291 00:05:30.291 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.291 http://cunit.sourceforge.net/ 00:05:30.291 00:05:30.291 00:05:30.291 Suite: pci 00:05:30.291 Test: pci_hook ...[2024-09-30 23:23:09.994071] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69024 has claimed it 00:05:30.291 passed 00:05:30.291 00:05:30.291 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.291 suites 1 1 n/a 0 0 00:05:30.291 tests 1 1 1 0 0 00:05:30.291 asserts 25 25 25 0 n/a 00:05:30.291 00:05:30.291 Elapsed time = 0.006 seconds 00:05:30.291 EAL: Cannot find device (10000:00:01.0) 00:05:30.291 EAL: Failed to attach device on primary process 00:05:30.291 00:05:30.292 real 0m0.097s 00:05:30.292 user 0m0.047s 00:05:30.292 sys 0m0.049s 00:05:30.292 ************************************ 00:05:30.292 END TEST env_pci 00:05:30.292 ************************************ 00:05:30.292 23:23:10 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.292 23:23:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:30.292 23:23:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:30.292 23:23:10 env -- env/env.sh@15 -- # uname 00:05:30.292 23:23:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:30.292 23:23:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:30.292 23:23:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.292 23:23:10 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:30.292 23:23:10 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.292 23:23:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.292 ************************************ 00:05:30.292 START TEST env_dpdk_post_init 00:05:30.292 ************************************ 00:05:30.292 23:23:10 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.551 EAL: Detected CPU lcores: 10 00:05:30.551 EAL: Detected NUMA nodes: 1 00:05:30.551 EAL: Detected shared linkage of DPDK 00:05:30.551 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.551 EAL: Selected IOVA mode 'PA' 00:05:30.551 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.551 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:30.551 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:30.551 Starting DPDK initialization... 00:05:30.551 Starting SPDK post initialization... 00:05:30.551 SPDK NVMe probe 00:05:30.551 Attaching to 0000:00:10.0 00:05:30.551 Attaching to 0000:00:11.0 00:05:30.551 Attached to 0000:00:10.0 00:05:30.551 Attached to 0000:00:11.0 00:05:30.551 Cleaning up... 
00:05:30.551 00:05:30.551 real 0m0.256s 00:05:30.551 user 0m0.064s 00:05:30.551 sys 0m0.093s 00:05:30.551 ************************************ 00:05:30.551 END TEST env_dpdk_post_init 00:05:30.551 ************************************ 00:05:30.551 23:23:10 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.551 23:23:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.810 23:23:10 env -- env/env.sh@26 -- # uname 00:05:30.810 23:23:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:30.810 23:23:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.810 23:23:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.810 23:23:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.810 23:23:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.810 ************************************ 00:05:30.810 START TEST env_mem_callbacks 00:05:30.810 ************************************ 00:05:30.810 23:23:10 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.810 EAL: Detected CPU lcores: 10 00:05:30.810 EAL: Detected NUMA nodes: 1 00:05:30.810 EAL: Detected shared linkage of DPDK 00:05:30.810 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.810 EAL: Selected IOVA mode 'PA' 00:05:30.810 00:05:30.810 00:05:30.810 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.810 http://cunit.sourceforge.net/ 00:05:30.810 00:05:30.810 00:05:30.810 Suite: memory 00:05:30.810 Test: test ... 
00:05:30.810 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.810 register 0x200000200000 2097152 00:05:30.810 malloc 3145728 00:05:30.810 register 0x200000400000 4194304 00:05:30.810 buf 0x200000500000 len 3145728 PASSED 00:05:30.810 malloc 64 00:05:30.810 buf 0x2000004fff40 len 64 PASSED 00:05:30.810 malloc 4194304 00:05:30.810 register 0x200000800000 6291456 00:05:30.810 buf 0x200000a00000 len 4194304 PASSED 00:05:30.810 free 0x200000500000 3145728 00:05:30.810 free 0x2000004fff40 64 00:05:30.810 unregister 0x200000400000 4194304 PASSED 00:05:30.810 free 0x200000a00000 4194304 00:05:30.810 unregister 0x200000800000 6291456 PASSED 00:05:30.810 malloc 8388608 00:05:30.810 register 0x200000400000 10485760 00:05:30.810 buf 0x200000600000 len 8388608 PASSED 00:05:30.810 free 0x200000600000 8388608 00:05:30.810 unregister 0x200000400000 10485760 PASSED 00:05:30.810 passed 00:05:30.810 00:05:30.810 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.810 suites 1 1 n/a 0 0 00:05:30.810 tests 1 1 1 0 0 00:05:30.810 asserts 15 15 15 0 n/a 00:05:30.810 00:05:30.810 Elapsed time = 0.012 seconds 00:05:31.068 00:05:31.068 real 0m0.206s 00:05:31.068 user 0m0.039s 00:05:31.068 sys 0m0.065s 00:05:31.068 23:23:10 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.068 23:23:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:31.068 ************************************ 00:05:31.068 END TEST env_mem_callbacks 00:05:31.068 ************************************ 00:05:31.068 00:05:31.068 real 0m3.072s 00:05:31.068 user 0m1.430s 00:05:31.068 sys 0m1.309s 00:05:31.068 23:23:10 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.068 ************************************ 00:05:31.068 END TEST env 00:05:31.068 ************************************ 00:05:31.068 23:23:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.068 23:23:10 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:31.068 23:23:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.068 23:23:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.068 23:23:10 -- common/autotest_common.sh@10 -- # set +x 00:05:31.068 ************************************ 00:05:31.068 START TEST rpc 00:05:31.068 ************************************ 00:05:31.068 23:23:10 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:31.068 * Looking for test storage... 00:05:31.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.327 23:23:10 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:31.327 23:23:10 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.327 23:23:10 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:31.327 23:23:11 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.327 23:23:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.327 23:23:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.327 23:23:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.327 23:23:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.327 23:23:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.327 23:23:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.327 23:23:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.327 23:23:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.327 23:23:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.327 23:23:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.327 23:23:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.327 23:23:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:31.327 23:23:11 rpc -- scripts/common.sh@345 -- # : 1 00:05:31.327 23:23:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.327 23:23:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.327 23:23:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:31.327 23:23:11 rpc -- scripts/common.sh@353 -- # local d=1 00:05:31.327 23:23:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.327 23:23:11 rpc -- scripts/common.sh@355 -- # echo 1 00:05:31.327 23:23:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.327 23:23:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:31.327 23:23:11 rpc -- scripts/common.sh@353 -- # local d=2 00:05:31.327 23:23:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.327 23:23:11 rpc -- scripts/common.sh@355 -- # echo 2 00:05:31.327 23:23:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.327 23:23:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.327 23:23:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.327 23:23:11 rpc -- scripts/common.sh@368 -- # return 0 00:05:31.327 23:23:11 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.327 23:23:11 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.327 --rc genhtml_branch_coverage=1 00:05:31.327 --rc genhtml_function_coverage=1 00:05:31.327 --rc genhtml_legend=1 00:05:31.327 --rc geninfo_all_blocks=1 00:05:31.327 --rc geninfo_unexecuted_blocks=1 00:05:31.327 00:05:31.327 ' 00:05:31.327 23:23:11 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.327 --rc genhtml_branch_coverage=1 00:05:31.327 --rc genhtml_function_coverage=1 00:05:31.327 --rc genhtml_legend=1 00:05:31.327 --rc geninfo_all_blocks=1 00:05:31.327 --rc geninfo_unexecuted_blocks=1 00:05:31.327 00:05:31.327 ' 00:05:31.327 23:23:11 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:31.327 --rc genhtml_branch_coverage=1 00:05:31.327 --rc genhtml_function_coverage=1 00:05:31.327 --rc genhtml_legend=1 00:05:31.327 --rc geninfo_all_blocks=1 00:05:31.327 --rc geninfo_unexecuted_blocks=1 00:05:31.327 00:05:31.327 ' 00:05:31.327 23:23:11 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.327 --rc genhtml_branch_coverage=1 00:05:31.327 --rc genhtml_function_coverage=1 00:05:31.328 --rc genhtml_legend=1 00:05:31.328 --rc geninfo_all_blocks=1 00:05:31.328 --rc geninfo_unexecuted_blocks=1 00:05:31.328 00:05:31.328 ' 00:05:31.328 23:23:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69151 00:05:31.328 23:23:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:31.328 23:23:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.328 23:23:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69151 00:05:31.328 23:23:11 rpc -- common/autotest_common.sh@831 -- # '[' -z 69151 ']' 00:05:31.328 23:23:11 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.328 23:23:11 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.328 23:23:11 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.328 23:23:11 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.328 23:23:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.328 [2024-09-30 23:23:11.132583] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:05:31.328 [2024-09-30 23:23:11.133083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69151 ] 00:05:31.586 [2024-09-30 23:23:11.295992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.586 [2024-09-30 23:23:11.348438] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:31.586 [2024-09-30 23:23:11.348510] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69151' to capture a snapshot of events at runtime. 00:05:31.586 [2024-09-30 23:23:11.348522] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:31.586 [2024-09-30 23:23:11.348530] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:31.586 [2024-09-30 23:23:11.348542] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69151 for offline analysis/debug. 
00:05:31.586 [2024-09-30 23:23:11.348574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.152 23:23:11 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.152 23:23:11 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.152 23:23:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.152 23:23:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.152 23:23:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.152 23:23:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.152 23:23:11 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.152 23:23:11 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.152 23:23:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.152 ************************************ 00:05:32.152 START TEST rpc_integrity 00:05:32.152 ************************************ 00:05:32.152 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.411 23:23:12 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.411 { 00:05:32.411 "name": "Malloc0", 00:05:32.411 "aliases": [ 00:05:32.411 "dd5cd697-ff27-4027-8b73-65287d816083" 00:05:32.411 ], 00:05:32.411 "product_name": "Malloc disk", 00:05:32.411 "block_size": 512, 00:05:32.411 "num_blocks": 16384, 00:05:32.411 "uuid": "dd5cd697-ff27-4027-8b73-65287d816083", 00:05:32.411 "assigned_rate_limits": { 00:05:32.411 "rw_ios_per_sec": 0, 00:05:32.411 "rw_mbytes_per_sec": 0, 00:05:32.411 "r_mbytes_per_sec": 0, 00:05:32.411 "w_mbytes_per_sec": 0 00:05:32.411 }, 00:05:32.411 "claimed": false, 00:05:32.411 "zoned": false, 00:05:32.411 "supported_io_types": { 00:05:32.411 "read": true, 00:05:32.411 "write": true, 00:05:32.411 "unmap": true, 00:05:32.411 "flush": true, 00:05:32.411 "reset": true, 00:05:32.411 "nvme_admin": false, 00:05:32.411 "nvme_io": false, 00:05:32.411 "nvme_io_md": false, 00:05:32.411 "write_zeroes": true, 00:05:32.411 "zcopy": true, 00:05:32.411 "get_zone_info": false, 00:05:32.411 "zone_management": false, 00:05:32.411 "zone_append": false, 00:05:32.411 "compare": false, 00:05:32.411 "compare_and_write": false, 00:05:32.411 "abort": true, 00:05:32.411 "seek_hole": false, 
00:05:32.411 "seek_data": false, 00:05:32.411 "copy": true, 00:05:32.411 "nvme_iov_md": false 00:05:32.411 }, 00:05:32.411 "memory_domains": [ 00:05:32.411 { 00:05:32.411 "dma_device_id": "system", 00:05:32.411 "dma_device_type": 1 00:05:32.411 }, 00:05:32.411 { 00:05:32.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.411 "dma_device_type": 2 00:05:32.411 } 00:05:32.411 ], 00:05:32.411 "driver_specific": {} 00:05:32.411 } 00:05:32.411 ]' 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 [2024-09-30 23:23:12.157073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:32.411 [2024-09-30 23:23:12.157171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.411 [2024-09-30 23:23:12.157227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:32.411 [2024-09-30 23:23:12.157239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.411 [2024-09-30 23:23:12.159734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.411 [2024-09-30 23:23:12.159785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.411 Passthru0 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:32.411 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.411 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.411 { 00:05:32.411 "name": "Malloc0", 00:05:32.411 "aliases": [ 00:05:32.411 "dd5cd697-ff27-4027-8b73-65287d816083" 00:05:32.411 ], 00:05:32.411 "product_name": "Malloc disk", 00:05:32.411 "block_size": 512, 00:05:32.411 "num_blocks": 16384, 00:05:32.411 "uuid": "dd5cd697-ff27-4027-8b73-65287d816083", 00:05:32.411 "assigned_rate_limits": { 00:05:32.411 "rw_ios_per_sec": 0, 00:05:32.411 "rw_mbytes_per_sec": 0, 00:05:32.411 "r_mbytes_per_sec": 0, 00:05:32.411 "w_mbytes_per_sec": 0 00:05:32.412 }, 00:05:32.412 "claimed": true, 00:05:32.412 "claim_type": "exclusive_write", 00:05:32.412 "zoned": false, 00:05:32.412 "supported_io_types": { 00:05:32.412 "read": true, 00:05:32.412 "write": true, 00:05:32.412 "unmap": true, 00:05:32.412 "flush": true, 00:05:32.412 "reset": true, 00:05:32.412 "nvme_admin": false, 00:05:32.412 "nvme_io": false, 00:05:32.412 "nvme_io_md": false, 00:05:32.412 "write_zeroes": true, 00:05:32.412 "zcopy": true, 00:05:32.412 "get_zone_info": false, 00:05:32.412 "zone_management": false, 00:05:32.412 "zone_append": false, 00:05:32.412 "compare": false, 00:05:32.412 "compare_and_write": false, 00:05:32.412 "abort": true, 00:05:32.412 "seek_hole": false, 00:05:32.412 "seek_data": false, 00:05:32.412 "copy": true, 00:05:32.412 "nvme_iov_md": false 00:05:32.412 }, 00:05:32.412 "memory_domains": [ 00:05:32.412 { 00:05:32.412 "dma_device_id": "system", 00:05:32.412 "dma_device_type": 1 00:05:32.412 }, 00:05:32.412 { 00:05:32.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.412 "dma_device_type": 2 00:05:32.412 } 00:05:32.412 ], 00:05:32.412 "driver_specific": {} 00:05:32.412 }, 00:05:32.412 { 00:05:32.412 "name": "Passthru0", 00:05:32.412 "aliases": [ 00:05:32.412 "3cbb3634-3619-5b1c-9e2c-e035737b24bb" 00:05:32.412 ], 00:05:32.412 "product_name": "passthru", 00:05:32.412 
"block_size": 512, 00:05:32.412 "num_blocks": 16384, 00:05:32.412 "uuid": "3cbb3634-3619-5b1c-9e2c-e035737b24bb", 00:05:32.412 "assigned_rate_limits": { 00:05:32.412 "rw_ios_per_sec": 0, 00:05:32.412 "rw_mbytes_per_sec": 0, 00:05:32.412 "r_mbytes_per_sec": 0, 00:05:32.412 "w_mbytes_per_sec": 0 00:05:32.412 }, 00:05:32.412 "claimed": false, 00:05:32.412 "zoned": false, 00:05:32.412 "supported_io_types": { 00:05:32.412 "read": true, 00:05:32.412 "write": true, 00:05:32.412 "unmap": true, 00:05:32.412 "flush": true, 00:05:32.412 "reset": true, 00:05:32.412 "nvme_admin": false, 00:05:32.412 "nvme_io": false, 00:05:32.412 "nvme_io_md": false, 00:05:32.412 "write_zeroes": true, 00:05:32.412 "zcopy": true, 00:05:32.412 "get_zone_info": false, 00:05:32.412 "zone_management": false, 00:05:32.412 "zone_append": false, 00:05:32.412 "compare": false, 00:05:32.412 "compare_and_write": false, 00:05:32.412 "abort": true, 00:05:32.412 "seek_hole": false, 00:05:32.412 "seek_data": false, 00:05:32.412 "copy": true, 00:05:32.412 "nvme_iov_md": false 00:05:32.412 }, 00:05:32.412 "memory_domains": [ 00:05:32.412 { 00:05:32.412 "dma_device_id": "system", 00:05:32.412 "dma_device_type": 1 00:05:32.412 }, 00:05:32.412 { 00:05:32.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.412 "dma_device_type": 2 00:05:32.412 } 00:05:32.412 ], 00:05:32.412 "driver_specific": { 00:05:32.412 "passthru": { 00:05:32.412 "name": "Passthru0", 00:05:32.412 "base_bdev_name": "Malloc0" 00:05:32.412 } 00:05:32.412 } 00:05:32.412 } 00:05:32.412 ]' 00:05:32.412 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.412 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.412 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.412 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.412 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.412 23:23:12 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.412 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:32.412 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.412 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.412 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.412 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.412 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.412 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.671 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.671 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.671 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.671 ************************************ 00:05:32.671 END TEST rpc_integrity 00:05:32.671 ************************************ 00:05:32.671 23:23:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.671 00:05:32.671 real 0m0.320s 00:05:32.671 user 0m0.191s 00:05:32.671 sys 0m0.053s 00:05:32.671 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.671 23:23:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.671 23:23:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:32.671 23:23:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.671 23:23:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.671 23:23:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.671 ************************************ 00:05:32.671 START TEST rpc_plugins 00:05:32.671 ************************************ 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:32.671 { 00:05:32.671 "name": "Malloc1", 00:05:32.671 "aliases": [ 00:05:32.671 "5d6d2e54-d1c4-4b8b-9459-37dc25a4b225" 00:05:32.671 ], 00:05:32.671 "product_name": "Malloc disk", 00:05:32.671 "block_size": 4096, 00:05:32.671 "num_blocks": 256, 00:05:32.671 "uuid": "5d6d2e54-d1c4-4b8b-9459-37dc25a4b225", 00:05:32.671 "assigned_rate_limits": { 00:05:32.671 "rw_ios_per_sec": 0, 00:05:32.671 "rw_mbytes_per_sec": 0, 00:05:32.671 "r_mbytes_per_sec": 0, 00:05:32.671 "w_mbytes_per_sec": 0 00:05:32.671 }, 00:05:32.671 "claimed": false, 00:05:32.671 "zoned": false, 00:05:32.671 "supported_io_types": { 00:05:32.671 "read": true, 00:05:32.671 "write": true, 00:05:32.671 "unmap": true, 00:05:32.671 "flush": true, 00:05:32.671 "reset": true, 00:05:32.671 "nvme_admin": false, 00:05:32.671 "nvme_io": false, 00:05:32.671 "nvme_io_md": false, 00:05:32.671 "write_zeroes": true, 00:05:32.671 "zcopy": true, 00:05:32.671 "get_zone_info": false, 00:05:32.671 "zone_management": false, 00:05:32.671 "zone_append": false, 00:05:32.671 "compare": false, 00:05:32.671 "compare_and_write": false, 00:05:32.671 "abort": true, 00:05:32.671 "seek_hole": false, 00:05:32.671 "seek_data": false, 00:05:32.671 "copy": 
true, 00:05:32.671 "nvme_iov_md": false 00:05:32.671 }, 00:05:32.671 "memory_domains": [ 00:05:32.671 { 00:05:32.671 "dma_device_id": "system", 00:05:32.671 "dma_device_type": 1 00:05:32.671 }, 00:05:32.671 { 00:05:32.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.671 "dma_device_type": 2 00:05:32.671 } 00:05:32.671 ], 00:05:32.671 "driver_specific": {} 00:05:32.671 } 00:05:32.671 ]' 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.671 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:32.671 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:32.930 ************************************ 00:05:32.930 END TEST rpc_plugins 00:05:32.930 ************************************ 00:05:32.930 23:23:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:32.930 00:05:32.930 real 0m0.168s 00:05:32.930 user 0m0.099s 00:05:32.930 sys 0m0.031s 00:05:32.930 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.930 23:23:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 23:23:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:32.930 23:23:12 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.930 23:23:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.930 23:23:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 ************************************ 00:05:32.930 START TEST rpc_trace_cmd_test 00:05:32.930 ************************************ 00:05:32.930 23:23:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:32.930 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:32.930 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:32.930 23:23:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.930 23:23:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.930 23:23:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.930 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:32.930 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69151", 00:05:32.930 "tpoint_group_mask": "0x8", 00:05:32.930 "iscsi_conn": { 00:05:32.930 "mask": "0x2", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "scsi": { 00:05:32.930 "mask": "0x4", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "bdev": { 00:05:32.930 "mask": "0x8", 00:05:32.930 "tpoint_mask": "0xffffffffffffffff" 00:05:32.930 }, 00:05:32.930 "nvmf_rdma": { 00:05:32.930 "mask": "0x10", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "nvmf_tcp": { 00:05:32.930 "mask": "0x20", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "ftl": { 00:05:32.930 "mask": "0x40", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "blobfs": { 00:05:32.930 "mask": "0x80", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "dsa": { 00:05:32.930 "mask": "0x200", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "thread": { 00:05:32.930 "mask": "0x400", 00:05:32.930 
"tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "nvme_pcie": { 00:05:32.930 "mask": "0x800", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "iaa": { 00:05:32.930 "mask": "0x1000", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "nvme_tcp": { 00:05:32.930 "mask": "0x2000", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "bdev_nvme": { 00:05:32.930 "mask": "0x4000", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "sock": { 00:05:32.930 "mask": "0x8000", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "blob": { 00:05:32.930 "mask": "0x10000", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 }, 00:05:32.930 "bdev_raid": { 00:05:32.930 "mask": "0x20000", 00:05:32.930 "tpoint_mask": "0x0" 00:05:32.930 } 00:05:32.930 }' 00:05:32.931 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:32.931 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:32.931 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:32.931 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:32.931 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.189 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.189 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.189 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.189 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.189 ************************************ 00:05:33.189 END TEST rpc_trace_cmd_test 00:05:33.189 ************************************ 00:05:33.189 23:23:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.189 00:05:33.189 real 0m0.268s 00:05:33.189 user 0m0.199s 00:05:33.189 sys 0m0.054s 00:05:33.189 23:23:12 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.189 23:23:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.189 23:23:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.189 23:23:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.189 23:23:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.189 23:23:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.189 23:23:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.189 23:23:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.189 ************************************ 00:05:33.189 START TEST rpc_daemon_integrity 00:05:33.189 ************************************ 00:05:33.189 23:23:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:33.189 23:23:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.189 23:23:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.189 23:23:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.189 23:23:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.189 23:23:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.189 23:23:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.189 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.189 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.189 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.189 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.189 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.190 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.190 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:33.190 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.190 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.449 { 00:05:33.449 "name": "Malloc2", 00:05:33.449 "aliases": [ 00:05:33.449 "a2427580-7421-4de8-b614-dba52ee4ab66" 00:05:33.449 ], 00:05:33.449 "product_name": "Malloc disk", 00:05:33.449 "block_size": 512, 00:05:33.449 "num_blocks": 16384, 00:05:33.449 "uuid": "a2427580-7421-4de8-b614-dba52ee4ab66", 00:05:33.449 "assigned_rate_limits": { 00:05:33.449 "rw_ios_per_sec": 0, 00:05:33.449 "rw_mbytes_per_sec": 0, 00:05:33.449 "r_mbytes_per_sec": 0, 00:05:33.449 "w_mbytes_per_sec": 0 00:05:33.449 }, 00:05:33.449 "claimed": false, 00:05:33.449 "zoned": false, 00:05:33.449 "supported_io_types": { 00:05:33.449 "read": true, 00:05:33.449 "write": true, 00:05:33.449 "unmap": true, 00:05:33.449 "flush": true, 00:05:33.449 "reset": true, 00:05:33.449 "nvme_admin": false, 00:05:33.449 "nvme_io": false, 00:05:33.449 "nvme_io_md": false, 00:05:33.449 "write_zeroes": true, 00:05:33.449 "zcopy": true, 00:05:33.449 "get_zone_info": false, 00:05:33.449 "zone_management": false, 00:05:33.449 "zone_append": false, 00:05:33.449 "compare": false, 00:05:33.449 "compare_and_write": false, 00:05:33.449 "abort": true, 00:05:33.449 "seek_hole": false, 00:05:33.449 "seek_data": false, 00:05:33.449 "copy": true, 00:05:33.449 "nvme_iov_md": false 00:05:33.449 }, 00:05:33.449 "memory_domains": [ 00:05:33.449 { 00:05:33.449 "dma_device_id": "system", 00:05:33.449 "dma_device_type": 1 00:05:33.449 }, 00:05:33.449 { 00:05:33.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.449 "dma_device_type": 2 00:05:33.449 } 00:05:33.449 ], 00:05:33.449 "driver_specific": {} 00:05:33.449 } 00:05:33.449 ]' 
00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.449 [2024-09-30 23:23:13.120519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.449 [2024-09-30 23:23:13.120601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.449 [2024-09-30 23:23:13.120634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:33.449 [2024-09-30 23:23:13.120644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.449 [2024-09-30 23:23:13.123133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.449 [2024-09-30 23:23:13.123180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.449 Passthru0 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.449 { 00:05:33.449 "name": "Malloc2", 00:05:33.449 "aliases": [ 00:05:33.449 "a2427580-7421-4de8-b614-dba52ee4ab66" 00:05:33.449 ], 00:05:33.449 "product_name": "Malloc disk", 00:05:33.449 "block_size": 
512, 00:05:33.449 "num_blocks": 16384, 00:05:33.449 "uuid": "a2427580-7421-4de8-b614-dba52ee4ab66", 00:05:33.449 "assigned_rate_limits": { 00:05:33.449 "rw_ios_per_sec": 0, 00:05:33.449 "rw_mbytes_per_sec": 0, 00:05:33.449 "r_mbytes_per_sec": 0, 00:05:33.449 "w_mbytes_per_sec": 0 00:05:33.449 }, 00:05:33.449 "claimed": true, 00:05:33.449 "claim_type": "exclusive_write", 00:05:33.449 "zoned": false, 00:05:33.449 "supported_io_types": { 00:05:33.449 "read": true, 00:05:33.449 "write": true, 00:05:33.449 "unmap": true, 00:05:33.449 "flush": true, 00:05:33.449 "reset": true, 00:05:33.449 "nvme_admin": false, 00:05:33.449 "nvme_io": false, 00:05:33.449 "nvme_io_md": false, 00:05:33.449 "write_zeroes": true, 00:05:33.449 "zcopy": true, 00:05:33.449 "get_zone_info": false, 00:05:33.449 "zone_management": false, 00:05:33.449 "zone_append": false, 00:05:33.449 "compare": false, 00:05:33.449 "compare_and_write": false, 00:05:33.449 "abort": true, 00:05:33.449 "seek_hole": false, 00:05:33.449 "seek_data": false, 00:05:33.449 "copy": true, 00:05:33.449 "nvme_iov_md": false 00:05:33.449 }, 00:05:33.449 "memory_domains": [ 00:05:33.449 { 00:05:33.449 "dma_device_id": "system", 00:05:33.449 "dma_device_type": 1 00:05:33.449 }, 00:05:33.449 { 00:05:33.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.449 "dma_device_type": 2 00:05:33.449 } 00:05:33.449 ], 00:05:33.449 "driver_specific": {} 00:05:33.449 }, 00:05:33.449 { 00:05:33.449 "name": "Passthru0", 00:05:33.449 "aliases": [ 00:05:33.449 "6347abac-1af3-5dfd-92e8-4d694c00d5e8" 00:05:33.449 ], 00:05:33.449 "product_name": "passthru", 00:05:33.449 "block_size": 512, 00:05:33.449 "num_blocks": 16384, 00:05:33.449 "uuid": "6347abac-1af3-5dfd-92e8-4d694c00d5e8", 00:05:33.449 "assigned_rate_limits": { 00:05:33.449 "rw_ios_per_sec": 0, 00:05:33.449 "rw_mbytes_per_sec": 0, 00:05:33.449 "r_mbytes_per_sec": 0, 00:05:33.449 "w_mbytes_per_sec": 0 00:05:33.449 }, 00:05:33.449 "claimed": false, 00:05:33.449 "zoned": false, 00:05:33.449 
"supported_io_types": { 00:05:33.449 "read": true, 00:05:33.449 "write": true, 00:05:33.449 "unmap": true, 00:05:33.449 "flush": true, 00:05:33.449 "reset": true, 00:05:33.449 "nvme_admin": false, 00:05:33.449 "nvme_io": false, 00:05:33.449 "nvme_io_md": false, 00:05:33.449 "write_zeroes": true, 00:05:33.449 "zcopy": true, 00:05:33.449 "get_zone_info": false, 00:05:33.449 "zone_management": false, 00:05:33.449 "zone_append": false, 00:05:33.449 "compare": false, 00:05:33.449 "compare_and_write": false, 00:05:33.449 "abort": true, 00:05:33.449 "seek_hole": false, 00:05:33.449 "seek_data": false, 00:05:33.449 "copy": true, 00:05:33.449 "nvme_iov_md": false 00:05:33.449 }, 00:05:33.449 "memory_domains": [ 00:05:33.449 { 00:05:33.449 "dma_device_id": "system", 00:05:33.449 "dma_device_type": 1 00:05:33.449 }, 00:05:33.449 { 00:05:33.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.449 "dma_device_type": 2 00:05:33.449 } 00:05:33.449 ], 00:05:33.449 "driver_specific": { 00:05:33.449 "passthru": { 00:05:33.449 "name": "Passthru0", 00:05:33.449 "base_bdev_name": "Malloc2" 00:05:33.449 } 00:05:33.449 } 00:05:33.449 } 00:05:33.449 ]' 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:33.449 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.450 00:05:33.450 real 0m0.341s 00:05:33.450 user 0m0.207s 00:05:33.450 sys 0m0.058s 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.450 23:23:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.450 ************************************ 00:05:33.450 END TEST rpc_daemon_integrity 00:05:33.450 ************************************ 00:05:33.708 23:23:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:33.708 23:23:13 rpc -- rpc/rpc.sh@84 -- # killprocess 69151 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@950 -- # '[' -z 69151 ']' 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@954 -- # kill -0 69151 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@955 -- # uname 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69151 00:05:33.708 killing process with pid 69151 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69151' 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@969 -- # kill 69151 00:05:33.708 23:23:13 rpc -- common/autotest_common.sh@974 -- # wait 69151 00:05:33.967 00:05:33.967 real 0m2.993s 00:05:33.967 user 0m3.621s 00:05:33.967 sys 0m0.917s 00:05:33.967 23:23:13 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.967 23:23:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.967 ************************************ 00:05:33.967 END TEST rpc 00:05:33.967 ************************************ 00:05:34.225 23:23:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:34.225 23:23:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.225 23:23:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.225 23:23:13 -- common/autotest_common.sh@10 -- # set +x 00:05:34.225 ************************************ 00:05:34.225 START TEST skip_rpc 00:05:34.225 ************************************ 00:05:34.225 23:23:13 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:34.225 * Looking for test storage... 
00:05:34.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.225 23:23:13 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.225 23:23:13 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.225 23:23:13 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.225 23:23:14 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.225 23:23:14 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.485 23:23:14 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:34.485 23:23:14 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.485 23:23:14 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.485 --rc genhtml_branch_coverage=1 00:05:34.485 --rc genhtml_function_coverage=1 00:05:34.485 --rc genhtml_legend=1 00:05:34.485 --rc geninfo_all_blocks=1 00:05:34.485 --rc geninfo_unexecuted_blocks=1 00:05:34.485 00:05:34.485 ' 00:05:34.485 23:23:14 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.485 --rc genhtml_branch_coverage=1 00:05:34.485 --rc genhtml_function_coverage=1 00:05:34.485 --rc genhtml_legend=1 00:05:34.485 --rc geninfo_all_blocks=1 00:05:34.485 --rc geninfo_unexecuted_blocks=1 00:05:34.485 00:05:34.485 ' 00:05:34.485 23:23:14 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:34.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.485 --rc genhtml_branch_coverage=1 00:05:34.485 --rc genhtml_function_coverage=1 00:05:34.485 --rc genhtml_legend=1 00:05:34.485 --rc geninfo_all_blocks=1 00:05:34.485 --rc geninfo_unexecuted_blocks=1 00:05:34.485 00:05:34.485 ' 00:05:34.485 23:23:14 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.485 --rc genhtml_branch_coverage=1 00:05:34.485 --rc genhtml_function_coverage=1 00:05:34.485 --rc genhtml_legend=1 00:05:34.485 --rc geninfo_all_blocks=1 00:05:34.485 --rc geninfo_unexecuted_blocks=1 00:05:34.485 00:05:34.485 ' 00:05:34.485 23:23:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.485 23:23:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:34.485 23:23:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.485 23:23:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.485 23:23:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.485 23:23:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.485 ************************************ 00:05:34.485 START TEST skip_rpc 00:05:34.485 ************************************ 00:05:34.485 23:23:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:34.485 23:23:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69358 00:05:34.485 23:23:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.485 23:23:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.485 23:23:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:34.485 [2024-09-30 23:23:14.206946] Starting SPDK v25.01-pre 
git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:34.485 [2024-09-30 23:23:14.207160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69358 ] 00:05:34.744 [2024-09-30 23:23:14.368996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.744 [2024-09-30 23:23:14.420476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69358 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69358 ']' 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69358 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69358 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69358' 00:05:40.023 killing process with pid 69358 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69358 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69358 00:05:40.023 00:05:40.023 real 0m5.451s 00:05:40.023 user 0m5.038s 00:05:40.023 sys 0m0.340s 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.023 23:23:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.023 ************************************ 00:05:40.023 END TEST skip_rpc 00:05:40.023 ************************************ 00:05:40.023 23:23:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:40.023 23:23:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.023 23:23:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.023 23:23:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.023 
************************************ 00:05:40.023 START TEST skip_rpc_with_json 00:05:40.023 ************************************ 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69446 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69446 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69446 ']' 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.023 23:23:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.023 [2024-09-30 23:23:19.727397] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:05:40.023 [2024-09-30 23:23:19.727617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69446 ] 00:05:40.282 [2024-09-30 23:23:19.887631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.282 [2024-09-30 23:23:19.932220] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.852 [2024-09-30 23:23:20.534936] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.852 request: 00:05:40.852 { 00:05:40.852 "trtype": "tcp", 00:05:40.852 "method": "nvmf_get_transports", 00:05:40.852 "req_id": 1 00:05:40.852 } 00:05:40.852 Got JSON-RPC error response 00:05:40.852 response: 00:05:40.852 { 00:05:40.852 "code": -19, 00:05:40.852 "message": "No such device" 00:05:40.852 } 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.852 [2024-09-30 23:23:20.547019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.852 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.113 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.113 23:23:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.113 { 00:05:41.113 "subsystems": [ 00:05:41.113 { 00:05:41.113 "subsystem": "fsdev", 00:05:41.113 "config": [ 00:05:41.113 { 00:05:41.113 "method": "fsdev_set_opts", 00:05:41.113 "params": { 00:05:41.113 "fsdev_io_pool_size": 65535, 00:05:41.113 "fsdev_io_cache_size": 256 00:05:41.113 } 00:05:41.113 } 00:05:41.113 ] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "keyring", 00:05:41.113 "config": [] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "iobuf", 00:05:41.113 "config": [ 00:05:41.113 { 00:05:41.113 "method": "iobuf_set_options", 00:05:41.113 "params": { 00:05:41.113 "small_pool_count": 8192, 00:05:41.113 "large_pool_count": 1024, 00:05:41.113 "small_bufsize": 8192, 00:05:41.113 "large_bufsize": 135168 00:05:41.113 } 00:05:41.113 } 00:05:41.113 ] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "sock", 00:05:41.113 "config": [ 00:05:41.113 { 00:05:41.113 "method": "sock_set_default_impl", 00:05:41.113 "params": { 00:05:41.113 "impl_name": "posix" 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "sock_impl_set_options", 00:05:41.113 "params": { 00:05:41.113 "impl_name": "ssl", 00:05:41.113 "recv_buf_size": 4096, 00:05:41.113 "send_buf_size": 4096, 00:05:41.113 "enable_recv_pipe": true, 00:05:41.113 "enable_quickack": false, 00:05:41.113 "enable_placement_id": 0, 00:05:41.113 
"enable_zerocopy_send_server": true, 00:05:41.113 "enable_zerocopy_send_client": false, 00:05:41.113 "zerocopy_threshold": 0, 00:05:41.113 "tls_version": 0, 00:05:41.113 "enable_ktls": false 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "sock_impl_set_options", 00:05:41.113 "params": { 00:05:41.113 "impl_name": "posix", 00:05:41.113 "recv_buf_size": 2097152, 00:05:41.113 "send_buf_size": 2097152, 00:05:41.113 "enable_recv_pipe": true, 00:05:41.113 "enable_quickack": false, 00:05:41.113 "enable_placement_id": 0, 00:05:41.113 "enable_zerocopy_send_server": true, 00:05:41.113 "enable_zerocopy_send_client": false, 00:05:41.113 "zerocopy_threshold": 0, 00:05:41.113 "tls_version": 0, 00:05:41.113 "enable_ktls": false 00:05:41.113 } 00:05:41.113 } 00:05:41.113 ] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "vmd", 00:05:41.113 "config": [] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "accel", 00:05:41.113 "config": [ 00:05:41.113 { 00:05:41.113 "method": "accel_set_options", 00:05:41.113 "params": { 00:05:41.113 "small_cache_size": 128, 00:05:41.113 "large_cache_size": 16, 00:05:41.113 "task_count": 2048, 00:05:41.113 "sequence_count": 2048, 00:05:41.113 "buf_count": 2048 00:05:41.113 } 00:05:41.113 } 00:05:41.113 ] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "bdev", 00:05:41.113 "config": [ 00:05:41.113 { 00:05:41.113 "method": "bdev_set_options", 00:05:41.113 "params": { 00:05:41.113 "bdev_io_pool_size": 65535, 00:05:41.113 "bdev_io_cache_size": 256, 00:05:41.113 "bdev_auto_examine": true, 00:05:41.113 "iobuf_small_cache_size": 128, 00:05:41.113 "iobuf_large_cache_size": 16 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "bdev_raid_set_options", 00:05:41.113 "params": { 00:05:41.113 "process_window_size_kb": 1024, 00:05:41.113 "process_max_bandwidth_mb_sec": 0 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "bdev_iscsi_set_options", 00:05:41.113 "params": { 00:05:41.113 
"timeout_sec": 30 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "bdev_nvme_set_options", 00:05:41.113 "params": { 00:05:41.113 "action_on_timeout": "none", 00:05:41.113 "timeout_us": 0, 00:05:41.113 "timeout_admin_us": 0, 00:05:41.113 "keep_alive_timeout_ms": 10000, 00:05:41.113 "arbitration_burst": 0, 00:05:41.113 "low_priority_weight": 0, 00:05:41.113 "medium_priority_weight": 0, 00:05:41.113 "high_priority_weight": 0, 00:05:41.113 "nvme_adminq_poll_period_us": 10000, 00:05:41.113 "nvme_ioq_poll_period_us": 0, 00:05:41.113 "io_queue_requests": 0, 00:05:41.113 "delay_cmd_submit": true, 00:05:41.113 "transport_retry_count": 4, 00:05:41.113 "bdev_retry_count": 3, 00:05:41.113 "transport_ack_timeout": 0, 00:05:41.113 "ctrlr_loss_timeout_sec": 0, 00:05:41.113 "reconnect_delay_sec": 0, 00:05:41.113 "fast_io_fail_timeout_sec": 0, 00:05:41.113 "disable_auto_failback": false, 00:05:41.113 "generate_uuids": false, 00:05:41.113 "transport_tos": 0, 00:05:41.113 "nvme_error_stat": false, 00:05:41.113 "rdma_srq_size": 0, 00:05:41.113 "io_path_stat": false, 00:05:41.113 "allow_accel_sequence": false, 00:05:41.113 "rdma_max_cq_size": 0, 00:05:41.113 "rdma_cm_event_timeout_ms": 0, 00:05:41.113 "dhchap_digests": [ 00:05:41.113 "sha256", 00:05:41.113 "sha384", 00:05:41.113 "sha512" 00:05:41.113 ], 00:05:41.113 "dhchap_dhgroups": [ 00:05:41.113 "null", 00:05:41.113 "ffdhe2048", 00:05:41.113 "ffdhe3072", 00:05:41.113 "ffdhe4096", 00:05:41.113 "ffdhe6144", 00:05:41.113 "ffdhe8192" 00:05:41.113 ] 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "bdev_nvme_set_hotplug", 00:05:41.113 "params": { 00:05:41.113 "period_us": 100000, 00:05:41.113 "enable": false 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "bdev_wait_for_examine" 00:05:41.113 } 00:05:41.113 ] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "scsi", 00:05:41.113 "config": null 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "scheduler", 
00:05:41.113 "config": [ 00:05:41.113 { 00:05:41.113 "method": "framework_set_scheduler", 00:05:41.113 "params": { 00:05:41.113 "name": "static" 00:05:41.113 } 00:05:41.113 } 00:05:41.113 ] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "vhost_scsi", 00:05:41.113 "config": [] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "vhost_blk", 00:05:41.113 "config": [] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "ublk", 00:05:41.113 "config": [] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "nbd", 00:05:41.113 "config": [] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "nvmf", 00:05:41.113 "config": [ 00:05:41.113 { 00:05:41.113 "method": "nvmf_set_config", 00:05:41.113 "params": { 00:05:41.113 "discovery_filter": "match_any", 00:05:41.113 "admin_cmd_passthru": { 00:05:41.113 "identify_ctrlr": false 00:05:41.113 }, 00:05:41.113 "dhchap_digests": [ 00:05:41.113 "sha256", 00:05:41.113 "sha384", 00:05:41.113 "sha512" 00:05:41.113 ], 00:05:41.113 "dhchap_dhgroups": [ 00:05:41.113 "null", 00:05:41.113 "ffdhe2048", 00:05:41.113 "ffdhe3072", 00:05:41.113 "ffdhe4096", 00:05:41.113 "ffdhe6144", 00:05:41.113 "ffdhe8192" 00:05:41.113 ] 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "nvmf_set_max_subsystems", 00:05:41.113 "params": { 00:05:41.113 "max_subsystems": 1024 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "nvmf_set_crdt", 00:05:41.113 "params": { 00:05:41.113 "crdt1": 0, 00:05:41.113 "crdt2": 0, 00:05:41.113 "crdt3": 0 00:05:41.113 } 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "method": "nvmf_create_transport", 00:05:41.113 "params": { 00:05:41.113 "trtype": "TCP", 00:05:41.113 "max_queue_depth": 128, 00:05:41.113 "max_io_qpairs_per_ctrlr": 127, 00:05:41.113 "in_capsule_data_size": 4096, 00:05:41.113 "max_io_size": 131072, 00:05:41.113 "io_unit_size": 131072, 00:05:41.113 "max_aq_depth": 128, 00:05:41.113 "num_shared_buffers": 511, 00:05:41.113 "buf_cache_size": 4294967295, 
00:05:41.113 "dif_insert_or_strip": false, 00:05:41.113 "zcopy": false, 00:05:41.113 "c2h_success": true, 00:05:41.113 "sock_priority": 0, 00:05:41.113 "abort_timeout_sec": 1, 00:05:41.113 "ack_timeout": 0, 00:05:41.113 "data_wr_pool_size": 0 00:05:41.113 } 00:05:41.113 } 00:05:41.113 ] 00:05:41.113 }, 00:05:41.113 { 00:05:41.113 "subsystem": "iscsi", 00:05:41.114 "config": [ 00:05:41.114 { 00:05:41.114 "method": "iscsi_set_options", 00:05:41.114 "params": { 00:05:41.114 "node_base": "iqn.2016-06.io.spdk", 00:05:41.114 "max_sessions": 128, 00:05:41.114 "max_connections_per_session": 2, 00:05:41.114 "max_queue_depth": 64, 00:05:41.114 "default_time2wait": 2, 00:05:41.114 "default_time2retain": 20, 00:05:41.114 "first_burst_length": 8192, 00:05:41.114 "immediate_data": true, 00:05:41.114 "allow_duplicated_isid": false, 00:05:41.114 "error_recovery_level": 0, 00:05:41.114 "nop_timeout": 60, 00:05:41.114 "nop_in_interval": 30, 00:05:41.114 "disable_chap": false, 00:05:41.114 "require_chap": false, 00:05:41.114 "mutual_chap": false, 00:05:41.114 "chap_group": 0, 00:05:41.114 "max_large_datain_per_connection": 64, 00:05:41.114 "max_r2t_per_connection": 4, 00:05:41.114 "pdu_pool_size": 36864, 00:05:41.114 "immediate_data_pool_size": 16384, 00:05:41.114 "data_out_pool_size": 2048 00:05:41.114 } 00:05:41.114 } 00:05:41.114 ] 00:05:41.114 } 00:05:41.114 ] 00:05:41.114 } 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69446 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69446 ']' 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69446 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69446 00:05:41.114 killing process with pid 69446 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69446' 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69446 00:05:41.114 23:23:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69446 00:05:41.374 23:23:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69469 00:05:41.374 23:23:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.374 23:23:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69469 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69469 ']' 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69469 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69469 00:05:46.653 killing process with pid 69469 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69469' 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69469 00:05:46.653 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69469 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.912 00:05:46.912 real 0m6.966s 00:05:46.912 user 0m6.497s 00:05:46.912 sys 0m0.740s 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.912 ************************************ 00:05:46.912 END TEST skip_rpc_with_json 00:05:46.912 ************************************ 00:05:46.912 23:23:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.912 23:23:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.912 23:23:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.912 23:23:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.912 ************************************ 00:05:46.912 START TEST skip_rpc_with_delay 00:05:46.912 ************************************ 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.912 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.913 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.913 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.913 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.913 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.913 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:46.913 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.913 [2024-09-30 23:23:26.762413] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:46.913 [2024-09-30 23:23:26.762613] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:47.177 ************************************ 00:05:47.177 END TEST skip_rpc_with_delay 00:05:47.177 ************************************ 00:05:47.177 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:47.177 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.177 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.177 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.177 00:05:47.177 real 0m0.161s 00:05:47.177 user 0m0.086s 00:05:47.177 sys 0m0.074s 00:05:47.178 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.178 23:23:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:47.178 23:23:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:47.178 23:23:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:47.178 23:23:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:47.178 23:23:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.178 23:23:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.178 23:23:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.178 ************************************ 00:05:47.178 START TEST exit_on_failed_rpc_init 00:05:47.178 ************************************ 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69586 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69586 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69586 ']' 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.178 23:23:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.178 [2024-09-30 23:23:27.007093] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:05:47.178 [2024-09-30 23:23:27.007296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69586 ]
00:05:47.451 [2024-09-30 23:23:27.176210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.451 [2024-09-30 23:23:27.222480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:05:48.043 23:23:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:48.302 [2024-09-30 23:23:27.919785] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:05:48.302 [2024-09-30 23:23:27.920034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69604 ]
00:05:48.302 [2024-09-30 23:23:28.083343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:48.302 [2024-09-30 23:23:28.153398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:48.302 [2024-09-30 23:23:28.153579] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:48.302 [2024-09-30 23:23:28.153652] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:48.302 [2024-09-30 23:23:28.153690] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69586
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69586 ']'
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69586
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69586
00:05:48.561 killing process with pid 69586
23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:48.561 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:48.562 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69586'
00:05:48.562 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69586
00:05:48.562 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69586
00:05:49.130
00:05:49.130 real 0m1.851s
00:05:49.130 user 0m1.986s
00:05:49.130 sys 0m0.585s
00:05:49.130 ************************************
00:05:49.130 END TEST exit_on_failed_rpc_init
00:05:49.130 ************************************
00:05:49.130 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:49.130 23:23:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:49.130 23:23:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:49.130
00:05:49.130 real 0m14.957s
00:05:49.130 user 0m13.838s
00:05:49.130 sys 0m2.048s
00:05:49.130 23:23:28 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:49.130 23:23:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:49.130 ************************************
00:05:49.130 END TEST skip_rpc
00:05:49.130 ************************************
00:05:49.130 23:23:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:49.130 23:23:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:49.130 23:23:28 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:49.130 23:23:28 -- common/autotest_common.sh@10 -- # set +x
00:05:49.130 ************************************
00:05:49.130 START TEST rpc_client
00:05:49.130 ************************************
00:05:49.130 23:23:28 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:49.389 * Looking for test storage...
00:05:49.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:49.389 23:23:29 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:49.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.389 --rc genhtml_branch_coverage=1
00:05:49.389 --rc genhtml_function_coverage=1
00:05:49.389 --rc genhtml_legend=1
00:05:49.389 --rc geninfo_all_blocks=1
00:05:49.389 --rc geninfo_unexecuted_blocks=1
00:05:49.389
00:05:49.389 '
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:49.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.389 --rc genhtml_branch_coverage=1
00:05:49.389 --rc genhtml_function_coverage=1
00:05:49.389 --rc genhtml_legend=1
00:05:49.389 --rc geninfo_all_blocks=1
00:05:49.389 --rc geninfo_unexecuted_blocks=1
00:05:49.389
00:05:49.389 '
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:49.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.389 --rc genhtml_branch_coverage=1
00:05:49.389 --rc genhtml_function_coverage=1
00:05:49.389 --rc genhtml_legend=1
00:05:49.389 --rc geninfo_all_blocks=1
00:05:49.389 --rc geninfo_unexecuted_blocks=1
00:05:49.389
00:05:49.389 '
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:49.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.389 --rc genhtml_branch_coverage=1
00:05:49.389 --rc genhtml_function_coverage=1
00:05:49.389 --rc genhtml_legend=1
00:05:49.389 --rc geninfo_all_blocks=1
00:05:49.389 --rc geninfo_unexecuted_blocks=1
00:05:49.389
00:05:49.389 '
00:05:49.389 23:23:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:05:49.389 OK
00:05:49.389 23:23:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:49.389
00:05:49.389 real 0m0.287s
00:05:49.389 user 0m0.153s
00:05:49.389 sys 0m0.151s
00:05:49.389 ************************************
00:05:49.389 END TEST rpc_client
00:05:49.389 ************************************
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:49.389 23:23:29 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 23:23:29 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:49.389 23:23:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:49.389 23:23:29 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:49.389 23:23:29 -- common/autotest_common.sh@10 -- # set +x
00:05:49.389 ************************************
00:05:49.389 START TEST json_config
00:05:49.389 ************************************
00:05:49.389 23:23:29 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:49.649 23:23:29 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:49.649 23:23:29 json_config -- common/autotest_common.sh@1681 -- # lcov --version
00:05:49.649 23:23:29 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:49.649 23:23:29 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:49.649 23:23:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:49.649 23:23:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:49.649 23:23:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:49.649 23:23:29 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:49.649 23:23:29 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:49.649 23:23:29 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:49.649 23:23:29 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:49.649 23:23:29 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:49.649 23:23:29 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:49.649 23:23:29 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:49.649 23:23:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:49.649 23:23:29 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:49.649 23:23:29 json_config -- scripts/common.sh@345 -- # : 1
00:05:49.649 23:23:29 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:49.649 23:23:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:49.649 23:23:29 json_config -- scripts/common.sh@365 -- # decimal 1
00:05:49.649 23:23:29 json_config -- scripts/common.sh@353 -- # local d=1
00:05:49.649 23:23:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:49.649 23:23:29 json_config -- scripts/common.sh@355 -- # echo 1
00:05:49.649 23:23:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:05:49.649 23:23:29 json_config -- scripts/common.sh@366 -- # decimal 2
00:05:49.649 23:23:29 json_config -- scripts/common.sh@353 -- # local d=2
00:05:49.649 23:23:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:49.649 23:23:29 json_config -- scripts/common.sh@355 -- # echo 2
00:05:49.649 23:23:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:05:49.649 23:23:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:49.649 23:23:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:49.649 23:23:29 json_config -- scripts/common.sh@368 -- # return 0
00:05:49.649 23:23:29 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:49.649 23:23:29 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:49.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.649 --rc genhtml_branch_coverage=1
00:05:49.649 --rc genhtml_function_coverage=1
00:05:49.649 --rc genhtml_legend=1
00:05:49.649 --rc geninfo_all_blocks=1
00:05:49.649 --rc geninfo_unexecuted_blocks=1
00:05:49.649
00:05:49.649 '
00:05:49.649 23:23:29 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:49.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.649 --rc genhtml_branch_coverage=1
00:05:49.649 --rc genhtml_function_coverage=1
00:05:49.649 --rc genhtml_legend=1
00:05:49.649 --rc geninfo_all_blocks=1
00:05:49.649 --rc geninfo_unexecuted_blocks=1
00:05:49.649
00:05:49.649 '
00:05:49.649 23:23:29 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:49.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.649 --rc genhtml_branch_coverage=1
00:05:49.649 --rc genhtml_function_coverage=1
00:05:49.649 --rc genhtml_legend=1
00:05:49.649 --rc geninfo_all_blocks=1
00:05:49.649 --rc geninfo_unexecuted_blocks=1
00:05:49.649
00:05:49.650 '
00:05:49.650 23:23:29 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:49.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.650 --rc genhtml_branch_coverage=1
00:05:49.650 --rc genhtml_function_coverage=1
00:05:49.650 --rc genhtml_legend=1
00:05:49.650 --rc geninfo_all_blocks=1
00:05:49.650 --rc geninfo_unexecuted_blocks=1
00:05:49.650
00:05:49.650 '
00:05:49.650 23:23:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:32f16825-d8ef-4474-a5f6-58ecfae20c36
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=32f16825-d8ef-4474-a5f6-58ecfae20c36
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:49.650 23:23:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:05:49.650 23:23:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:49.650 23:23:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:49.650 23:23:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:49.650 23:23:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.650 23:23:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.650 23:23:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.650 23:23:29 json_config -- paths/export.sh@5 -- # export PATH
00:05:49.650 23:23:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@51 -- # : 0
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:49.650 23:23:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:49.650 23:23:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:05:49.650 23:23:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:49.650 23:23:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:49.650 23:23:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:49.650 23:23:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:49.650 23:23:29 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
WARNING: No tests are enabled so not running JSON configuration tests
00:05:49.650 23:23:29 json_config -- json_config/json_config.sh@28 -- # exit 0
00:05:49.650
00:05:49.650 real 0m0.237s
00:05:49.650 user 0m0.145s
00:05:49.650 sys 0m0.095s
00:05:49.650 ************************************
00:05:49.650 END TEST json_config
00:05:49.650 ************************************
00:05:49.650 23:23:29 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:49.650 23:23:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:49.910 23:23:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:05:49.910 23:23:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:49.910 23:23:29 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:49.910 23:23:29 -- common/autotest_common.sh@10 -- # set +x
00:05:49.910 ************************************
00:05:49.910 START TEST json_config_extra_key
00:05:49.910 ************************************
00:05:49.910 23:23:29 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:05:49.910 23:23:29 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:49.910 23:23:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version
00:05:49.910 23:23:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:49.910 23:23:29 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:05:49.911 23:23:29 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:49.911 23:23:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:49.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.911 --rc genhtml_branch_coverage=1
00:05:49.911 --rc genhtml_function_coverage=1
00:05:49.911 --rc genhtml_legend=1
00:05:49.911 --rc geninfo_all_blocks=1
00:05:49.911 --rc geninfo_unexecuted_blocks=1
00:05:49.911
00:05:49.911 '
00:05:49.911 23:23:29 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:49.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.911 --rc genhtml_branch_coverage=1
00:05:49.911 --rc genhtml_function_coverage=1
00:05:49.911 --rc genhtml_legend=1
00:05:49.911 --rc geninfo_all_blocks=1
00:05:49.911 --rc geninfo_unexecuted_blocks=1
00:05:49.911
00:05:49.911 '
00:05:49.911 23:23:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:49.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.911 --rc genhtml_branch_coverage=1
00:05:49.911 --rc genhtml_function_coverage=1
00:05:49.911 --rc genhtml_legend=1
00:05:49.911 --rc geninfo_all_blocks=1
00:05:49.911 --rc geninfo_unexecuted_blocks=1
00:05:49.911
00:05:49.911 '
00:05:49.911 23:23:29 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:49.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:49.911 --rc genhtml_branch_coverage=1
00:05:49.911 --rc genhtml_function_coverage=1
00:05:49.911 --rc genhtml_legend=1
00:05:49.911 --rc geninfo_all_blocks=1
00:05:49.911 --rc geninfo_unexecuted_blocks=1
00:05:49.911
00:05:49.911 '
00:05:49.911 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:32f16825-d8ef-4474-a5f6-58ecfae20c36
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=32f16825-d8ef-4474-a5f6-58ecfae20c36
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:49.911 23:23:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:49.911 23:23:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.911 23:23:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.911 23:23:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.911 23:23:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:49.911 23:23:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:49.911 23:23:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:50.171 23:23:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:50.171 23:23:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:50.171 23:23:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:50.171 23:23:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:50.171 23:23:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:50.171 23:23:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
INFO: launching applications...
00:05:50.171 23:23:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69792
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69792 /var/tmp/spdk_tgt.sock
00:05:50.171 23:23:29 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69792 ']'
00:05:50.171 23:23:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:05:50.171 23:23:29 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:50.171 23:23:29 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:50.171 23:23:29 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:50.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.171 23:23:29 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.171 23:23:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.171 [2024-09-30 23:23:29.870971] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:50.171 [2024-09-30 23:23:29.871191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69792 ] 00:05:50.430 [2024-09-30 23:23:30.247019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.430 [2024-09-30 23:23:30.277124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.998 23:23:30 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.998 00:05:50.999 INFO: shutting down applications... 00:05:50.999 23:23:30 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:50.999 23:23:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:50.999 23:23:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69792 ]] 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69792 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69792 00:05:50.999 23:23:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.566 23:23:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.566 23:23:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.566 23:23:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69792 00:05:51.566 23:23:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.566 23:23:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:51.566 23:23:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.566 23:23:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.566 SPDK target shutdown done 00:05:51.566 23:23:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:51.566 Success 00:05:51.566 00:05:51.566 real 0m1.645s 00:05:51.566 user 0m1.334s 00:05:51.566 sys 0m0.487s 00:05:51.566 23:23:31 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.566 23:23:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.566 ************************************ 
00:05:51.566 END TEST json_config_extra_key 00:05:51.566 ************************************ 00:05:51.566 23:23:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.566 23:23:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.566 23:23:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.566 23:23:31 -- common/autotest_common.sh@10 -- # set +x 00:05:51.566 ************************************ 00:05:51.566 START TEST alias_rpc 00:05:51.566 ************************************ 00:05:51.566 23:23:31 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.566 * Looking for test storage... 00:05:51.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:51.567 23:23:31 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.567 23:23:31 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.567 23:23:31 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.826 23:23:31 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.826 23:23:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.826 --rc genhtml_branch_coverage=1 00:05:51.826 --rc genhtml_function_coverage=1 00:05:51.826 --rc genhtml_legend=1 00:05:51.826 --rc geninfo_all_blocks=1 00:05:51.826 --rc geninfo_unexecuted_blocks=1 00:05:51.826 00:05:51.826 ' 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.826 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.826 --rc genhtml_branch_coverage=1 00:05:51.826 --rc genhtml_function_coverage=1 00:05:51.826 --rc genhtml_legend=1 00:05:51.826 --rc geninfo_all_blocks=1 00:05:51.826 --rc geninfo_unexecuted_blocks=1 00:05:51.826 00:05:51.826 ' 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.826 --rc genhtml_branch_coverage=1 00:05:51.826 --rc genhtml_function_coverage=1 00:05:51.826 --rc genhtml_legend=1 00:05:51.826 --rc geninfo_all_blocks=1 00:05:51.826 --rc geninfo_unexecuted_blocks=1 00:05:51.826 00:05:51.826 ' 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.826 --rc genhtml_branch_coverage=1 00:05:51.826 --rc genhtml_function_coverage=1 00:05:51.826 --rc genhtml_legend=1 00:05:51.826 --rc geninfo_all_blocks=1 00:05:51.826 --rc geninfo_unexecuted_blocks=1 00:05:51.826 00:05:51.826 ' 00:05:51.826 23:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:51.826 23:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69860 00:05:51.826 23:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.826 23:23:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69860 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69860 ']' 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.826 23:23:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.826 [2024-09-30 23:23:31.571750] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:51.826 [2024-09-30 23:23:31.571874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69860 ] 00:05:52.085 [2024-09-30 23:23:31.727976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.085 [2024-09-30 23:23:31.771906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.655 23:23:32 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.655 23:23:32 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.655 23:23:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:52.914 23:23:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69860 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69860 ']' 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69860 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69860 00:05:52.914 killing process with pid 69860 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69860' 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@969 -- # kill 69860 00:05:52.914 23:23:32 alias_rpc -- common/autotest_common.sh@974 -- # wait 69860 00:05:53.484 ************************************ 00:05:53.484 END TEST alias_rpc 00:05:53.484 ************************************ 00:05:53.484 00:05:53.484 real 0m1.773s 00:05:53.484 user 0m1.754s 00:05:53.484 sys 0m0.533s 00:05:53.484 23:23:33 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.484 23:23:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.485 23:23:33 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:53.485 23:23:33 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:53.485 23:23:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.485 23:23:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.485 23:23:33 -- common/autotest_common.sh@10 -- # set +x 00:05:53.485 ************************************ 00:05:53.485 START TEST spdkcli_tcp 00:05:53.485 ************************************ 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:53.485 * Looking for test storage... 
00:05:53.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.485 23:23:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.485 --rc genhtml_branch_coverage=1 00:05:53.485 --rc genhtml_function_coverage=1 00:05:53.485 --rc genhtml_legend=1 00:05:53.485 --rc geninfo_all_blocks=1 00:05:53.485 --rc geninfo_unexecuted_blocks=1 00:05:53.485 00:05:53.485 ' 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.485 --rc genhtml_branch_coverage=1 00:05:53.485 --rc genhtml_function_coverage=1 00:05:53.485 --rc genhtml_legend=1 00:05:53.485 --rc geninfo_all_blocks=1 00:05:53.485 --rc geninfo_unexecuted_blocks=1 00:05:53.485 00:05:53.485 ' 00:05:53.485 23:23:33 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.485 --rc genhtml_branch_coverage=1 00:05:53.485 --rc genhtml_function_coverage=1 00:05:53.485 --rc genhtml_legend=1 00:05:53.485 --rc geninfo_all_blocks=1 00:05:53.485 --rc geninfo_unexecuted_blocks=1 00:05:53.485 00:05:53.485 ' 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.485 --rc genhtml_branch_coverage=1 00:05:53.485 --rc genhtml_function_coverage=1 00:05:53.485 --rc genhtml_legend=1 00:05:53.485 --rc geninfo_all_blocks=1 00:05:53.485 --rc geninfo_unexecuted_blocks=1 00:05:53.485 00:05:53.485 ' 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69945 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69945 00:05:53.485 23:23:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:53.485 23:23:33 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69945 ']' 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.485 23:23:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.745 [2024-09-30 23:23:33.421912] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:53.745 [2024-09-30 23:23:33.422104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69945 ] 00:05:53.745 [2024-09-30 23:23:33.580718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.005 [2024-09-30 23:23:33.623972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.005 [2024-09-30 23:23:33.624039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.576 23:23:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.576 23:23:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:54.576 23:23:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69962 00:05:54.576 23:23:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:54.576 23:23:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:54.576 [ 00:05:54.576 "bdev_malloc_delete", 
00:05:54.576 "bdev_malloc_create", 00:05:54.576 "bdev_null_resize", 00:05:54.576 "bdev_null_delete", 00:05:54.576 "bdev_null_create", 00:05:54.576 "bdev_nvme_cuse_unregister", 00:05:54.576 "bdev_nvme_cuse_register", 00:05:54.576 "bdev_opal_new_user", 00:05:54.576 "bdev_opal_set_lock_state", 00:05:54.576 "bdev_opal_delete", 00:05:54.576 "bdev_opal_get_info", 00:05:54.576 "bdev_opal_create", 00:05:54.576 "bdev_nvme_opal_revert", 00:05:54.576 "bdev_nvme_opal_init", 00:05:54.576 "bdev_nvme_send_cmd", 00:05:54.576 "bdev_nvme_set_keys", 00:05:54.576 "bdev_nvme_get_path_iostat", 00:05:54.576 "bdev_nvme_get_mdns_discovery_info", 00:05:54.576 "bdev_nvme_stop_mdns_discovery", 00:05:54.576 "bdev_nvme_start_mdns_discovery", 00:05:54.576 "bdev_nvme_set_multipath_policy", 00:05:54.576 "bdev_nvme_set_preferred_path", 00:05:54.576 "bdev_nvme_get_io_paths", 00:05:54.576 "bdev_nvme_remove_error_injection", 00:05:54.576 "bdev_nvme_add_error_injection", 00:05:54.576 "bdev_nvme_get_discovery_info", 00:05:54.576 "bdev_nvme_stop_discovery", 00:05:54.576 "bdev_nvme_start_discovery", 00:05:54.576 "bdev_nvme_get_controller_health_info", 00:05:54.576 "bdev_nvme_disable_controller", 00:05:54.576 "bdev_nvme_enable_controller", 00:05:54.576 "bdev_nvme_reset_controller", 00:05:54.576 "bdev_nvme_get_transport_statistics", 00:05:54.576 "bdev_nvme_apply_firmware", 00:05:54.576 "bdev_nvme_detach_controller", 00:05:54.576 "bdev_nvme_get_controllers", 00:05:54.576 "bdev_nvme_attach_controller", 00:05:54.576 "bdev_nvme_set_hotplug", 00:05:54.576 "bdev_nvme_set_options", 00:05:54.576 "bdev_passthru_delete", 00:05:54.576 "bdev_passthru_create", 00:05:54.576 "bdev_lvol_set_parent_bdev", 00:05:54.576 "bdev_lvol_set_parent", 00:05:54.576 "bdev_lvol_check_shallow_copy", 00:05:54.576 "bdev_lvol_start_shallow_copy", 00:05:54.576 "bdev_lvol_grow_lvstore", 00:05:54.576 "bdev_lvol_get_lvols", 00:05:54.576 "bdev_lvol_get_lvstores", 00:05:54.576 "bdev_lvol_delete", 00:05:54.576 "bdev_lvol_set_read_only", 
00:05:54.576 "bdev_lvol_resize", 00:05:54.576 "bdev_lvol_decouple_parent", 00:05:54.576 "bdev_lvol_inflate", 00:05:54.576 "bdev_lvol_rename", 00:05:54.576 "bdev_lvol_clone_bdev", 00:05:54.576 "bdev_lvol_clone", 00:05:54.576 "bdev_lvol_snapshot", 00:05:54.576 "bdev_lvol_create", 00:05:54.576 "bdev_lvol_delete_lvstore", 00:05:54.576 "bdev_lvol_rename_lvstore", 00:05:54.576 "bdev_lvol_create_lvstore", 00:05:54.576 "bdev_raid_set_options", 00:05:54.576 "bdev_raid_remove_base_bdev", 00:05:54.576 "bdev_raid_add_base_bdev", 00:05:54.576 "bdev_raid_delete", 00:05:54.576 "bdev_raid_create", 00:05:54.576 "bdev_raid_get_bdevs", 00:05:54.576 "bdev_error_inject_error", 00:05:54.576 "bdev_error_delete", 00:05:54.576 "bdev_error_create", 00:05:54.576 "bdev_split_delete", 00:05:54.576 "bdev_split_create", 00:05:54.576 "bdev_delay_delete", 00:05:54.576 "bdev_delay_create", 00:05:54.576 "bdev_delay_update_latency", 00:05:54.576 "bdev_zone_block_delete", 00:05:54.576 "bdev_zone_block_create", 00:05:54.576 "blobfs_create", 00:05:54.576 "blobfs_detect", 00:05:54.576 "blobfs_set_cache_size", 00:05:54.576 "bdev_aio_delete", 00:05:54.576 "bdev_aio_rescan", 00:05:54.576 "bdev_aio_create", 00:05:54.576 "bdev_ftl_set_property", 00:05:54.576 "bdev_ftl_get_properties", 00:05:54.576 "bdev_ftl_get_stats", 00:05:54.576 "bdev_ftl_unmap", 00:05:54.576 "bdev_ftl_unload", 00:05:54.576 "bdev_ftl_delete", 00:05:54.576 "bdev_ftl_load", 00:05:54.576 "bdev_ftl_create", 00:05:54.576 "bdev_virtio_attach_controller", 00:05:54.576 "bdev_virtio_scsi_get_devices", 00:05:54.576 "bdev_virtio_detach_controller", 00:05:54.576 "bdev_virtio_blk_set_hotplug", 00:05:54.576 "bdev_iscsi_delete", 00:05:54.576 "bdev_iscsi_create", 00:05:54.576 "bdev_iscsi_set_options", 00:05:54.576 "accel_error_inject_error", 00:05:54.576 "ioat_scan_accel_module", 00:05:54.576 "dsa_scan_accel_module", 00:05:54.576 "iaa_scan_accel_module", 00:05:54.576 "keyring_file_remove_key", 00:05:54.576 "keyring_file_add_key", 00:05:54.576 
"keyring_linux_set_options", 00:05:54.576 "fsdev_aio_delete", 00:05:54.576 "fsdev_aio_create", 00:05:54.576 "iscsi_get_histogram", 00:05:54.576 "iscsi_enable_histogram", 00:05:54.576 "iscsi_set_options", 00:05:54.576 "iscsi_get_auth_groups", 00:05:54.576 "iscsi_auth_group_remove_secret", 00:05:54.576 "iscsi_auth_group_add_secret", 00:05:54.576 "iscsi_delete_auth_group", 00:05:54.576 "iscsi_create_auth_group", 00:05:54.576 "iscsi_set_discovery_auth", 00:05:54.576 "iscsi_get_options", 00:05:54.576 "iscsi_target_node_request_logout", 00:05:54.576 "iscsi_target_node_set_redirect", 00:05:54.576 "iscsi_target_node_set_auth", 00:05:54.576 "iscsi_target_node_add_lun", 00:05:54.576 "iscsi_get_stats", 00:05:54.576 "iscsi_get_connections", 00:05:54.576 "iscsi_portal_group_set_auth", 00:05:54.576 "iscsi_start_portal_group", 00:05:54.576 "iscsi_delete_portal_group", 00:05:54.576 "iscsi_create_portal_group", 00:05:54.576 "iscsi_get_portal_groups", 00:05:54.576 "iscsi_delete_target_node", 00:05:54.576 "iscsi_target_node_remove_pg_ig_maps", 00:05:54.576 "iscsi_target_node_add_pg_ig_maps", 00:05:54.576 "iscsi_create_target_node", 00:05:54.576 "iscsi_get_target_nodes", 00:05:54.576 "iscsi_delete_initiator_group", 00:05:54.576 "iscsi_initiator_group_remove_initiators", 00:05:54.576 "iscsi_initiator_group_add_initiators", 00:05:54.576 "iscsi_create_initiator_group", 00:05:54.576 "iscsi_get_initiator_groups", 00:05:54.576 "nvmf_set_crdt", 00:05:54.576 "nvmf_set_config", 00:05:54.576 "nvmf_set_max_subsystems", 00:05:54.576 "nvmf_stop_mdns_prr", 00:05:54.576 "nvmf_publish_mdns_prr", 00:05:54.576 "nvmf_subsystem_get_listeners", 00:05:54.576 "nvmf_subsystem_get_qpairs", 00:05:54.576 "nvmf_subsystem_get_controllers", 00:05:54.576 "nvmf_get_stats", 00:05:54.576 "nvmf_get_transports", 00:05:54.576 "nvmf_create_transport", 00:05:54.576 "nvmf_get_targets", 00:05:54.576 "nvmf_delete_target", 00:05:54.577 "nvmf_create_target", 00:05:54.577 "nvmf_subsystem_allow_any_host", 00:05:54.577 
"nvmf_subsystem_set_keys", 00:05:54.577 "nvmf_subsystem_remove_host", 00:05:54.577 "nvmf_subsystem_add_host", 00:05:54.577 "nvmf_ns_remove_host", 00:05:54.577 "nvmf_ns_add_host", 00:05:54.577 "nvmf_subsystem_remove_ns", 00:05:54.577 "nvmf_subsystem_set_ns_ana_group", 00:05:54.577 "nvmf_subsystem_add_ns", 00:05:54.577 "nvmf_subsystem_listener_set_ana_state", 00:05:54.577 "nvmf_discovery_get_referrals", 00:05:54.577 "nvmf_discovery_remove_referral", 00:05:54.577 "nvmf_discovery_add_referral", 00:05:54.577 "nvmf_subsystem_remove_listener", 00:05:54.577 "nvmf_subsystem_add_listener", 00:05:54.577 "nvmf_delete_subsystem", 00:05:54.577 "nvmf_create_subsystem", 00:05:54.577 "nvmf_get_subsystems", 00:05:54.577 "env_dpdk_get_mem_stats", 00:05:54.577 "nbd_get_disks", 00:05:54.577 "nbd_stop_disk", 00:05:54.577 "nbd_start_disk", 00:05:54.577 "ublk_recover_disk", 00:05:54.577 "ublk_get_disks", 00:05:54.577 "ublk_stop_disk", 00:05:54.577 "ublk_start_disk", 00:05:54.577 "ublk_destroy_target", 00:05:54.577 "ublk_create_target", 00:05:54.577 "virtio_blk_create_transport", 00:05:54.577 "virtio_blk_get_transports", 00:05:54.577 "vhost_controller_set_coalescing", 00:05:54.577 "vhost_get_controllers", 00:05:54.577 "vhost_delete_controller", 00:05:54.577 "vhost_create_blk_controller", 00:05:54.577 "vhost_scsi_controller_remove_target", 00:05:54.577 "vhost_scsi_controller_add_target", 00:05:54.577 "vhost_start_scsi_controller", 00:05:54.577 "vhost_create_scsi_controller", 00:05:54.577 "thread_set_cpumask", 00:05:54.577 "scheduler_set_options", 00:05:54.577 "framework_get_governor", 00:05:54.577 "framework_get_scheduler", 00:05:54.577 "framework_set_scheduler", 00:05:54.577 "framework_get_reactors", 00:05:54.577 "thread_get_io_channels", 00:05:54.577 "thread_get_pollers", 00:05:54.577 "thread_get_stats", 00:05:54.577 "framework_monitor_context_switch", 00:05:54.577 "spdk_kill_instance", 00:05:54.577 "log_enable_timestamps", 00:05:54.577 "log_get_flags", 00:05:54.577 "log_clear_flag", 
00:05:54.577 "log_set_flag", 00:05:54.577 "log_get_level", 00:05:54.577 "log_set_level", 00:05:54.577 "log_get_print_level", 00:05:54.577 "log_set_print_level", 00:05:54.577 "framework_enable_cpumask_locks", 00:05:54.577 "framework_disable_cpumask_locks", 00:05:54.577 "framework_wait_init", 00:05:54.577 "framework_start_init", 00:05:54.577 "scsi_get_devices", 00:05:54.577 "bdev_get_histogram", 00:05:54.577 "bdev_enable_histogram", 00:05:54.577 "bdev_set_qos_limit", 00:05:54.577 "bdev_set_qd_sampling_period", 00:05:54.577 "bdev_get_bdevs", 00:05:54.577 "bdev_reset_iostat", 00:05:54.577 "bdev_get_iostat", 00:05:54.577 "bdev_examine", 00:05:54.577 "bdev_wait_for_examine", 00:05:54.577 "bdev_set_options", 00:05:54.577 "accel_get_stats", 00:05:54.577 "accel_set_options", 00:05:54.577 "accel_set_driver", 00:05:54.577 "accel_crypto_key_destroy", 00:05:54.577 "accel_crypto_keys_get", 00:05:54.577 "accel_crypto_key_create", 00:05:54.577 "accel_assign_opc", 00:05:54.577 "accel_get_module_info", 00:05:54.577 "accel_get_opc_assignments", 00:05:54.577 "vmd_rescan", 00:05:54.577 "vmd_remove_device", 00:05:54.577 "vmd_enable", 00:05:54.577 "sock_get_default_impl", 00:05:54.577 "sock_set_default_impl", 00:05:54.577 "sock_impl_set_options", 00:05:54.577 "sock_impl_get_options", 00:05:54.577 "iobuf_get_stats", 00:05:54.577 "iobuf_set_options", 00:05:54.577 "keyring_get_keys", 00:05:54.577 "framework_get_pci_devices", 00:05:54.577 "framework_get_config", 00:05:54.577 "framework_get_subsystems", 00:05:54.577 "fsdev_set_opts", 00:05:54.577 "fsdev_get_opts", 00:05:54.577 "trace_get_info", 00:05:54.577 "trace_get_tpoint_group_mask", 00:05:54.577 "trace_disable_tpoint_group", 00:05:54.577 "trace_enable_tpoint_group", 00:05:54.577 "trace_clear_tpoint_mask", 00:05:54.577 "trace_set_tpoint_mask", 00:05:54.577 "notify_get_notifications", 00:05:54.577 "notify_get_types", 00:05:54.577 "spdk_get_version", 00:05:54.577 "rpc_get_methods" 00:05:54.577 ] 00:05:54.577 23:23:34 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:54.577 23:23:34 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.577 23:23:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.836 23:23:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:54.836 23:23:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69945 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69945 ']' 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69945 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69945 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69945' 00:05:54.836 killing process with pid 69945 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69945 00:05:54.836 23:23:34 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69945 00:05:55.096 00:05:55.096 real 0m1.781s 00:05:55.096 user 0m2.908s 00:05:55.096 sys 0m0.548s 00:05:55.096 23:23:34 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.096 23:23:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.096 ************************************ 00:05:55.096 END TEST spdkcli_tcp 00:05:55.096 ************************************ 00:05:55.096 23:23:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.096 23:23:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.096 23:23:34 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.096 23:23:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.096 ************************************ 00:05:55.096 START TEST dpdk_mem_utility 00:05:55.096 ************************************ 00:05:55.096 23:23:34 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.355 * Looking for test storage... 00:05:55.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:55.355 
23:23:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.355 23:23:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:55.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.355 --rc genhtml_branch_coverage=1 00:05:55.355 --rc genhtml_function_coverage=1 00:05:55.355 --rc genhtml_legend=1 00:05:55.355 --rc geninfo_all_blocks=1 00:05:55.355 --rc geninfo_unexecuted_blocks=1 00:05:55.355 00:05:55.355 ' 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:55.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.355 --rc 
genhtml_branch_coverage=1 00:05:55.355 --rc genhtml_function_coverage=1 00:05:55.355 --rc genhtml_legend=1 00:05:55.355 --rc geninfo_all_blocks=1 00:05:55.355 --rc geninfo_unexecuted_blocks=1 00:05:55.355 00:05:55.355 ' 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:55.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.355 --rc genhtml_branch_coverage=1 00:05:55.355 --rc genhtml_function_coverage=1 00:05:55.355 --rc genhtml_legend=1 00:05:55.355 --rc geninfo_all_blocks=1 00:05:55.355 --rc geninfo_unexecuted_blocks=1 00:05:55.355 00:05:55.355 ' 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:55.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.355 --rc genhtml_branch_coverage=1 00:05:55.355 --rc genhtml_function_coverage=1 00:05:55.355 --rc genhtml_legend=1 00:05:55.355 --rc geninfo_all_blocks=1 00:05:55.355 --rc geninfo_unexecuted_blocks=1 00:05:55.355 00:05:55.355 ' 00:05:55.355 23:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:55.355 23:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70045 00:05:55.355 23:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.355 23:23:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70045 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70045 ']' 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.355 23:23:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:55.615 [2024-09-30 23:23:35.255707] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:05:55.615 [2024-09-30 23:23:35.255831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70045 ] 00:05:55.615 [2024-09-30 23:23:35.416842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.615 [2024-09-30 23:23:35.460323] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.554 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.554 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:56.554 23:23:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:56.554 23:23:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:56.554 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.554 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.554 { 00:05:56.554 "filename": "/tmp/spdk_mem_dump.txt" 00:05:56.554 } 00:05:56.554 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.554 23:23:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:56.554 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:56.554 1 heaps 
totaling size 860.000000 MiB 00:05:56.554 size: 860.000000 MiB heap id: 0 00:05:56.554 end heaps---------- 00:05:56.554 9 mempools totaling size 642.649841 MiB 00:05:56.554 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:56.554 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:56.554 size: 92.545471 MiB name: bdev_io_70045 00:05:56.554 size: 51.011292 MiB name: evtpool_70045 00:05:56.554 size: 50.003479 MiB name: msgpool_70045 00:05:56.554 size: 36.509338 MiB name: fsdev_io_70045 00:05:56.554 size: 21.763794 MiB name: PDU_Pool 00:05:56.554 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:56.554 size: 0.026123 MiB name: Session_Pool 00:05:56.554 end mempools------- 00:05:56.554 6 memzones totaling size 4.142822 MiB 00:05:56.554 size: 1.000366 MiB name: RG_ring_0_70045 00:05:56.554 size: 1.000366 MiB name: RG_ring_1_70045 00:05:56.554 size: 1.000366 MiB name: RG_ring_4_70045 00:05:56.554 size: 1.000366 MiB name: RG_ring_5_70045 00:05:56.554 size: 0.125366 MiB name: RG_ring_2_70045 00:05:56.554 size: 0.015991 MiB name: RG_ring_3_70045 00:05:56.554 end memzones------- 00:05:56.554 23:23:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:56.554 heap id: 0 total size: 860.000000 MiB number of busy elements: 310 number of free elements: 16 00:05:56.554 list of free elements. 
size: 13.935974 MiB 00:05:56.554 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:56.554 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:56.554 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:56.554 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:56.554 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:56.554 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:56.554 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:56.554 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:56.554 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:56.554 element at address: 0x20001d800000 with size: 0.567688 MiB 00:05:56.554 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:56.554 element at address: 0x200003e00000 with size: 0.488098 MiB 00:05:56.554 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:56.554 element at address: 0x200007000000 with size: 0.480286 MiB 00:05:56.554 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:56.554 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:56.554 list of standard malloc elements. 
size: 199.267334 MiB 00:05:56.554 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:56.554 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:56.554 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:56.554 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:56.554 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:56.554 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:56.554 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:56.554 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:56.554 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:56.554 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:56.554 element at 
address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200003a5ea00 with size: 0.000183 MiB 
00:05:56.554 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d780 with 
size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:56.555 element at address: 
0x200003e7ec80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707af40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:56.555 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:56.555 
element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891540 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891600 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892680 with size: 0.000183 
MiB 00:05:56.555 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893b80 
with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:56.555 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:56.556 element at 
address: 0x20001d895080 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d140 with size: 0.000183 MiB 
00:05:56.556 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e640 with 
size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:56.556 element at address: 
0x20002ac6fb40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:56.556 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:56.556 list of memzone associated elements. size: 646.796692 MiB 00:05:56.556 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:56.556 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:56.556 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:56.556 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:56.556 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:56.556 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70045_0 00:05:56.556 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:56.556 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70045_0 00:05:56.556 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:56.556 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70045_0 00:05:56.556 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:56.556 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70045_0 00:05:56.556 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:56.556 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:56.556 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:56.556 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:56.556 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:56.556 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70045 00:05:56.556 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:56.556 associated memzone info: 
size: 2.000366 MiB name: RG_MP_msgpool_70045 00:05:56.556 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:56.556 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70045 00:05:56.556 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:56.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:56.556 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:56.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:56.556 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:56.556 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:56.556 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:56.556 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:56.556 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:56.556 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70045 00:05:56.556 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:56.556 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70045 00:05:56.556 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:56.556 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70045 00:05:56.556 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:56.556 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70045 00:05:56.556 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:56.556 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70045 00:05:56.556 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:56.556 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70045 00:05:56.556 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:56.556 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:56.556 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:56.556 associated memzone info: size: 
0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:56.556 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:56.556 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:56.556 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:05:56.556 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70045 00:05:56.557 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:56.557 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:56.557 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:56.557 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:56.557 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:05:56.557 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70045 00:05:56.557 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:56.557 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:56.557 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:56.557 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70045 00:05:56.557 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:56.557 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70045 00:05:56.557 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:05:56.557 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70045 00:05:56.557 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:56.557 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:56.557 23:23:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:56.557 23:23:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70045 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70045 ']' 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70045 
00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70045 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70045' 00:05:56.557 killing process with pid 70045 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70045 00:05:56.557 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70045 00:05:56.816 00:05:56.816 real 0m1.655s 00:05:56.816 user 0m1.564s 00:05:56.816 sys 0m0.513s 00:05:56.816 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.816 23:23:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:56.816 ************************************ 00:05:56.816 END TEST dpdk_mem_utility 00:05:56.816 ************************************ 00:05:56.816 23:23:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:56.816 23:23:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.816 23:23:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.816 23:23:36 -- common/autotest_common.sh@10 -- # set +x 00:05:56.816 ************************************ 00:05:56.816 START TEST event 00:05:56.816 ************************************ 00:05:56.816 23:23:36 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.076 * Looking for test storage... 
00:05:57.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.076 23:23:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.076 23:23:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.076 23:23:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.076 23:23:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.076 23:23:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.076 23:23:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.076 23:23:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.076 23:23:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.076 23:23:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.076 23:23:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.076 23:23:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.076 23:23:36 event -- scripts/common.sh@344 -- # case "$op" in 00:05:57.076 23:23:36 event -- scripts/common.sh@345 -- # : 1 00:05:57.076 23:23:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.076 23:23:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.076 23:23:36 event -- scripts/common.sh@365 -- # decimal 1 00:05:57.076 23:23:36 event -- scripts/common.sh@353 -- # local d=1 00:05:57.076 23:23:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.076 23:23:36 event -- scripts/common.sh@355 -- # echo 1 00:05:57.076 23:23:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.076 23:23:36 event -- scripts/common.sh@366 -- # decimal 2 00:05:57.076 23:23:36 event -- scripts/common.sh@353 -- # local d=2 00:05:57.076 23:23:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.076 23:23:36 event -- scripts/common.sh@355 -- # echo 2 00:05:57.076 23:23:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.076 23:23:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.076 23:23:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.076 23:23:36 event -- scripts/common.sh@368 -- # return 0 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.076 --rc genhtml_branch_coverage=1 00:05:57.076 --rc genhtml_function_coverage=1 00:05:57.076 --rc genhtml_legend=1 00:05:57.076 --rc geninfo_all_blocks=1 00:05:57.076 --rc geninfo_unexecuted_blocks=1 00:05:57.076 00:05:57.076 ' 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.076 --rc genhtml_branch_coverage=1 00:05:57.076 --rc genhtml_function_coverage=1 00:05:57.076 --rc genhtml_legend=1 00:05:57.076 --rc geninfo_all_blocks=1 00:05:57.076 --rc geninfo_unexecuted_blocks=1 00:05:57.076 00:05:57.076 ' 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.076 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:57.076 --rc genhtml_branch_coverage=1 00:05:57.076 --rc genhtml_function_coverage=1 00:05:57.076 --rc genhtml_legend=1 00:05:57.076 --rc geninfo_all_blocks=1 00:05:57.076 --rc geninfo_unexecuted_blocks=1 00:05:57.076 00:05:57.076 ' 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.076 --rc genhtml_branch_coverage=1 00:05:57.076 --rc genhtml_function_coverage=1 00:05:57.076 --rc genhtml_legend=1 00:05:57.076 --rc geninfo_all_blocks=1 00:05:57.076 --rc geninfo_unexecuted_blocks=1 00:05:57.076 00:05:57.076 ' 00:05:57.076 23:23:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.076 23:23:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.076 23:23:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:57.076 23:23:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.076 23:23:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.076 ************************************ 00:05:57.076 START TEST event_perf 00:05:57.076 ************************************ 00:05:57.076 23:23:36 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.335 Running I/O for 1 seconds...[2024-09-30 23:23:36.946635] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:05:57.335 [2024-09-30 23:23:36.946797] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70125 ] 00:05:57.335 [2024-09-30 23:23:37.106314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.335 [2024-09-30 23:23:37.155387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.335 [2024-09-30 23:23:37.155667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.335 Running I/O for 1 seconds...[2024-09-30 23:23:37.155807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.335 [2024-09-30 23:23:37.155934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.713 00:05:58.713 lcore 0: 95646 00:05:58.713 lcore 1: 95644 00:05:58.713 lcore 2: 95647 00:05:58.713 lcore 3: 95643 00:05:58.713 done. 
00:05:58.713 00:05:58.713 real 0m1.348s 00:05:58.713 user 0m4.118s 00:05:58.713 sys 0m0.111s 00:05:58.713 ************************************ 00:05:58.713 END TEST event_perf 00:05:58.713 ************************************ 00:05:58.713 23:23:38 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.713 23:23:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.713 23:23:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:58.713 23:23:38 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:58.713 23:23:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.713 23:23:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.713 ************************************ 00:05:58.713 START TEST event_reactor 00:05:58.713 ************************************ 00:05:58.713 23:23:38 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:58.713 [2024-09-30 23:23:38.368856] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:05:58.713 [2024-09-30 23:23:38.369088] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70165 ] 00:05:58.713 [2024-09-30 23:23:38.531906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.973 [2024-09-30 23:23:38.575124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.910 test_start 00:05:59.911 oneshot 00:05:59.911 tick 100 00:05:59.911 tick 100 00:05:59.911 tick 250 00:05:59.911 tick 100 00:05:59.911 tick 100 00:05:59.911 tick 100 00:05:59.911 tick 250 00:05:59.911 tick 500 00:05:59.911 tick 100 00:05:59.911 tick 100 00:05:59.911 tick 250 00:05:59.911 tick 100 00:05:59.911 tick 100 00:05:59.911 test_end 00:05:59.911 00:05:59.911 real 0m1.343s 00:05:59.911 user 0m1.138s 00:05:59.911 sys 0m0.098s 00:05:59.911 23:23:39 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.911 23:23:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:59.911 ************************************ 00:05:59.911 END TEST event_reactor 00:05:59.911 ************************************ 00:05:59.911 23:23:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.911 23:23:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:59.911 23:23:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.911 23:23:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.911 ************************************ 00:05:59.911 START TEST event_reactor_perf 00:05:59.911 ************************************ 00:05:59.911 23:23:39 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.169 [2024-09-30 
23:23:39.778351] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:00.170 [2024-09-30 23:23:39.778531] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70196 ] 00:06:00.170 [2024-09-30 23:23:39.937789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.170 [2024-09-30 23:23:39.981352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.549 test_start 00:06:01.549 test_end 00:06:01.549 Performance: 406102 events per second 00:06:01.549 ************************************ 00:06:01.549 END TEST event_reactor_perf 00:06:01.550 ************************************ 00:06:01.550 00:06:01.550 real 0m1.341s 00:06:01.550 user 0m1.129s 00:06:01.550 sys 0m0.104s 00:06:01.550 23:23:41 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.550 23:23:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.550 23:23:41 event -- event/event.sh@49 -- # uname -s 00:06:01.550 23:23:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:01.550 23:23:41 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:01.550 23:23:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.550 23:23:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.550 23:23:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.550 ************************************ 00:06:01.550 START TEST event_scheduler 00:06:01.550 ************************************ 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:01.550 * Looking for test storage... 
00:06:01.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.550 23:23:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:01.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.550 --rc genhtml_branch_coverage=1 00:06:01.550 --rc genhtml_function_coverage=1 00:06:01.550 --rc genhtml_legend=1 00:06:01.550 --rc geninfo_all_blocks=1 00:06:01.550 --rc geninfo_unexecuted_blocks=1 00:06:01.550 00:06:01.550 ' 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:01.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.550 --rc genhtml_branch_coverage=1 00:06:01.550 --rc genhtml_function_coverage=1 00:06:01.550 --rc 
genhtml_legend=1 00:06:01.550 --rc geninfo_all_blocks=1 00:06:01.550 --rc geninfo_unexecuted_blocks=1 00:06:01.550 00:06:01.550 ' 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:01.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.550 --rc genhtml_branch_coverage=1 00:06:01.550 --rc genhtml_function_coverage=1 00:06:01.550 --rc genhtml_legend=1 00:06:01.550 --rc geninfo_all_blocks=1 00:06:01.550 --rc geninfo_unexecuted_blocks=1 00:06:01.550 00:06:01.550 ' 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:01.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.550 --rc genhtml_branch_coverage=1 00:06:01.550 --rc genhtml_function_coverage=1 00:06:01.550 --rc genhtml_legend=1 00:06:01.550 --rc geninfo_all_blocks=1 00:06:01.550 --rc geninfo_unexecuted_blocks=1 00:06:01.550 00:06:01.550 ' 00:06:01.550 23:23:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:01.550 23:23:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70272 00:06:01.550 23:23:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:01.550 23:23:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.550 23:23:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70272 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70272 ']' 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:01.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.550 23:23:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:01.809 [2024-09-30 23:23:41.438994] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:01.809 [2024-09-30 23:23:41.439211] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70272 ] 00:06:01.809 [2024-09-30 23:23:41.597121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.809 [2024-09-30 23:23:41.643669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.809 [2024-09-30 23:23:41.643927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.809 [2024-09-30 23:23:41.643958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.809 [2024-09-30 23:23:41.644126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:02.747 23:23:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.747 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.747 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.747 POWER: Cannot set governor of lcore 0 to performance 00:06:02.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.747 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.747 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.747 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:02.747 POWER: Unable to set Power Management Environment for lcore 0 00:06:02.747 [2024-09-30 23:23:42.272784] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:02.747 [2024-09-30 23:23:42.272876] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:02.747 [2024-09-30 23:23:42.272920] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:02.747 [2024-09-30 23:23:42.272971] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:02.747 [2024-09-30 23:23:42.273002] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:02.747 [2024-09-30 23:23:42.273066] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.747 23:23:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.747 [2024-09-30 23:23:42.344197] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
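The `POWER:` lines above show the dynamic scheduler failing to open `/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor`, so DPDK governor initialization fails on core 0 and the scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95). This is typical on virtualized nodes such as VM-host-WFP7, where cpufreq support is usually absent. A minimal sketch of the underlying sysfs check (the `governor_status` helper is illustrative, not part of the test scripts):

```shell
# Illustrative check: the "Cannot set governor" messages above occur when
# this per-cpu sysfs file is missing or not writable, e.g. inside a VM
# without cpufreq support.
governor_status() {
    local gov="$1/cpufreq/scaling_governor"
    if [ -w "$gov" ]; then
        echo "writable"
    else
        echo "unavailable"
    fi
}

governor_status /sys/devices/system/cpu/cpu0
```

When the file reports `unavailable`, the fallback seen in the log is expected and the scheduler test proceeds without frequency scaling.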
00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.747 23:23:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.747 23:23:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.747 ************************************ 00:06:02.747 START TEST scheduler_create_thread 00:06:02.747 ************************************ 00:06:02.747 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:02.747 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:02.747 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.747 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.747 2 00:06:02.747 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.748 3 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.748 4 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.748 5 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.748 6 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:02.748 7 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.748 8 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.748 9 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.748 10 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.748 23:23:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.687 23:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.687 23:23:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:03.687 23:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.687 23:23:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.066 23:23:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.066 23:23:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:05.066 23:23:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:05.066 23:23:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.066 23:23:44 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.005 ************************************ 00:06:06.005 END TEST scheduler_create_thread 00:06:06.005 ************************************ 00:06:06.005 23:23:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.005 00:06:06.005 real 0m3.367s 00:06:06.005 user 0m0.017s 00:06:06.005 sys 0m0.002s 00:06:06.005 23:23:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.005 23:23:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.005 23:23:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:06.005 23:23:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70272 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70272 ']' 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70272 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70272 00:06:06.005 killing process with pid 70272 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70272' 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70272 00:06:06.005 23:23:45 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70272 00:06:06.264 [2024-09-30 23:23:46.101089] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:06.834 ************************************ 00:06:06.834 END TEST event_scheduler 00:06:06.834 ************************************ 00:06:06.834 00:06:06.834 real 0m5.239s 00:06:06.834 user 0m10.220s 00:06:06.834 sys 0m0.487s 00:06:06.834 23:23:46 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.834 23:23:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.834 23:23:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:06.834 23:23:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:06.834 23:23:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.834 23:23:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.834 23:23:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.834 ************************************ 00:06:06.834 START TEST app_repeat 00:06:06.834 ************************************ 00:06:06.834 23:23:46 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70378 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:06.834 
23:23:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70378' 00:06:06.834 Process app_repeat pid: 70378 00:06:06.834 spdk_app_start Round 0 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:06.834 23:23:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70378 /var/tmp/spdk-nbd.sock 00:06:06.834 23:23:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70378 ']' 00:06:06.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.834 23:23:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.834 23:23:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.834 23:23:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.834 23:23:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.834 23:23:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.834 [2024-09-30 23:23:46.513215] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:06.834 [2024-09-30 23:23:46.513368] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70378 ] 00:06:06.834 [2024-09-30 23:23:46.677275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.094 [2024-09-30 23:23:46.724592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.094 [2024-09-30 23:23:46.724693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.663 23:23:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.663 23:23:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:07.663 23:23:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.922 Malloc0 00:06:07.922 23:23:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.182 Malloc1 00:06:08.182 23:23:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.182 23:23:47 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.182 23:23:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.182 /dev/nbd0 00:06:08.182 23:23:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.182 23:23:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.182 1+0 records in 00:06:08.182 1+0 
records out 00:06:08.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607363 s, 6.7 MB/s 00:06:08.182 23:23:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:08.442 23:23:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.442 23:23:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.442 23:23:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.442 /dev/nbd1 00:06:08.442 23:23:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.442 23:23:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.442 1+0 records in 00:06:08.442 1+0 records out 00:06:08.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384786 s, 10.6 MB/s 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:08.442 23:23:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.701 23:23:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.701 23:23:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:08.701 23:23:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.701 23:23:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.701 23:23:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.701 23:23:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.701 23:23:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.701 23:23:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.701 { 00:06:08.701 "nbd_device": "/dev/nbd0", 00:06:08.701 "bdev_name": "Malloc0" 00:06:08.701 }, 00:06:08.701 { 00:06:08.701 "nbd_device": "/dev/nbd1", 00:06:08.701 "bdev_name": "Malloc1" 00:06:08.701 } 00:06:08.701 ]' 00:06:08.701 23:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.701 { 00:06:08.701 "nbd_device": "/dev/nbd0", 00:06:08.701 "bdev_name": "Malloc0" 00:06:08.701 }, 00:06:08.701 { 00:06:08.702 "nbd_device": "/dev/nbd1", 00:06:08.702 "bdev_name": "Malloc1" 00:06:08.702 } 00:06:08.702 ]' 00:06:08.702 23:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.961 /dev/nbd1' 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.961 /dev/nbd1' 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.961 256+0 records in 00:06:08.961 256+0 records out 00:06:08.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135415 s, 77.4 MB/s 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.961 256+0 records in 00:06:08.961 256+0 records out 00:06:08.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241459 s, 43.4 MB/s 00:06:08.961 23:23:48 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.961 256+0 records in 00:06:08.961 256+0 records out 00:06:08.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262344 s, 40.0 MB/s 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.961 23:23:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.221 23:23:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.479 23:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.738 23:23:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.738 23:23:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.738 23:23:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.997 [2024-09-30 23:23:49.746396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.997 [2024-09-30 23:23:49.789426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.997 [2024-09-30 23:23:49.789429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.997 
[2024-09-30 23:23:49.832288] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.997 [2024-09-30 23:23:49.832349] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.308 spdk_app_start Round 1 00:06:13.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.308 23:23:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:13.308 23:23:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:13.308 23:23:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70378 /var/tmp/spdk-nbd.sock 00:06:13.308 23:23:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70378 ']' 00:06:13.308 23:23:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.308 23:23:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.308 23:23:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:13.308 23:23:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.308 23:23:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.308 23:23:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.308 23:23:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:13.308 23:23:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.308 Malloc0 00:06:13.308 23:23:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.567 Malloc1 00:06:13.567 23:23:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.567 23:23:53 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.567 23:23:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.826 /dev/nbd0 00:06:13.826 23:23:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.826 23:23:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.826 1+0 records in 00:06:13.826 1+0 records out 00:06:13.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308442 s, 13.3 MB/s 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.826 
23:23:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:13.826 23:23:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:13.826 23:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.826 23:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.826 23:23:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.085 /dev/nbd1 00:06:14.085 23:23:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.085 23:23:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:14.085 23:23:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.085 1+0 records in 00:06:14.085 1+0 records out 00:06:14.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479676 s, 8.5 MB/s 00:06:14.086 23:23:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.086 23:23:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:14.086 23:23:53 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.086 23:23:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:14.086 23:23:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:14.086 23:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.086 23:23:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.086 23:23:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.086 23:23:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.086 23:23:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.345 { 00:06:14.345 "nbd_device": "/dev/nbd0", 00:06:14.345 "bdev_name": "Malloc0" 00:06:14.345 }, 00:06:14.345 { 00:06:14.345 "nbd_device": "/dev/nbd1", 00:06:14.345 "bdev_name": "Malloc1" 00:06:14.345 } 00:06:14.345 ]' 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.345 { 00:06:14.345 "nbd_device": "/dev/nbd0", 00:06:14.345 "bdev_name": "Malloc0" 00:06:14.345 }, 00:06:14.345 { 00:06:14.345 "nbd_device": "/dev/nbd1", 00:06:14.345 "bdev_name": "Malloc1" 00:06:14.345 } 00:06:14.345 ]' 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.345 /dev/nbd1' 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.345 /dev/nbd1' 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.345 
23:23:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.345 23:23:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.345 256+0 records in 00:06:14.345 256+0 records out 00:06:14.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143851 s, 72.9 MB/s 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.345 256+0 records in 00:06:14.345 256+0 records out 00:06:14.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229926 s, 45.6 MB/s 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.345 256+0 records in 00:06:14.345 256+0 records out 00:06:14.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318718 s, 32.9 MB/s 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.345 23:23:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.346 23:23:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.346 23:23:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.346 23:23:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.346 23:23:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:14.346 23:23:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.346 23:23:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.605 23:23:54 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.605 23:23:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.864 23:23:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.123 23:23:54 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.123 23:23:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.124 23:23:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.383 23:23:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.383 [2024-09-30 23:23:55.200198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.642 [2024-09-30 23:23:55.245894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.642 [2024-09-30 23:23:55.245945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.642 [2024-09-30 23:23:55.289310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.642 [2024-09-30 23:23:55.289400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.932 spdk_app_start Round 2 00:06:18.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:18.932 23:23:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.932 23:23:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:18.932 23:23:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70378 /var/tmp/spdk-nbd.sock 00:06:18.932 23:23:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70378 ']' 00:06:18.932 23:23:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.932 23:23:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.932 23:23:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.932 23:23:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.932 23:23:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.932 23:23:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.932 23:23:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:18.932 23:23:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.932 Malloc0 00:06:18.932 23:23:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.932 Malloc1 00:06:18.932 23:23:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.932 23:23:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.192 /dev/nbd0 00:06:19.192 23:23:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.192 23:23:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.192 1+0 records in 00:06:19.192 1+0 records out 00:06:19.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413724 s, 9.9 MB/s 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.192 23:23:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.192 23:23:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.192 23:23:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.192 23:23:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.451 /dev/nbd1 00:06:19.451 23:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.451 23:23:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:19.451 23:23:59 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.451 1+0 records in 00:06:19.451 1+0 records out 00:06:19.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437288 s, 9.4 MB/s 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.451 23:23:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.451 23:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.451 23:23:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.451 23:23:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.451 23:23:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.451 23:23:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.711 { 00:06:19.711 "nbd_device": "/dev/nbd0", 00:06:19.711 "bdev_name": "Malloc0" 00:06:19.711 }, 00:06:19.711 { 00:06:19.711 "nbd_device": "/dev/nbd1", 00:06:19.711 "bdev_name": "Malloc1" 00:06:19.711 } 00:06:19.711 ]' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.711 { 
00:06:19.711 "nbd_device": "/dev/nbd0", 00:06:19.711 "bdev_name": "Malloc0" 00:06:19.711 }, 00:06:19.711 { 00:06:19.711 "nbd_device": "/dev/nbd1", 00:06:19.711 "bdev_name": "Malloc1" 00:06:19.711 } 00:06:19.711 ]' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.711 /dev/nbd1' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.711 /dev/nbd1' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.711 256+0 records in 00:06:19.711 256+0 records out 00:06:19.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135499 s, 77.4 MB/s 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.711 23:23:59 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.711 256+0 records in 00:06:19.711 256+0 records out 00:06:19.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243884 s, 43.0 MB/s 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.711 256+0 records in 00:06:19.711 256+0 records out 00:06:19.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253733 s, 41.3 MB/s 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.711 23:23:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.970 23:23:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:19.970 23:23:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.970 23:23:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.970 23:23:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.970 23:23:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.970 23:23:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.970 23:23:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.970 23:23:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.971 23:23:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.230 23:23:59 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.230 23:23:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.489 23:24:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.489 23:24:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.748 23:24:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.007 
[2024-09-30 23:24:00.647785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.007 [2024-09-30 23:24:00.695471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.007 [2024-09-30 23:24:00.695480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.007 [2024-09-30 23:24:00.738710] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.007 [2024-09-30 23:24:00.738772] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.296 23:24:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70378 /var/tmp/spdk-nbd.sock 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70378 ']' 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:24.296 23:24:03 event.app_repeat -- event/event.sh@39 -- # killprocess 70378 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70378 ']' 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70378 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70378 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70378' 00:06:24.296 killing process with pid 70378 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70378 00:06:24.296 23:24:03 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70378 00:06:24.296 spdk_app_start is called in Round 0. 00:06:24.296 Shutdown signal received, stop current app iteration 00:06:24.296 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:06:24.296 spdk_app_start is called in Round 1. 00:06:24.296 Shutdown signal received, stop current app iteration 00:06:24.296 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:06:24.296 spdk_app_start is called in Round 2. 
00:06:24.296 Shutdown signal received, stop current app iteration 00:06:24.296 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 reinitialization... 00:06:24.296 spdk_app_start is called in Round 3. 00:06:24.297 Shutdown signal received, stop current app iteration 00:06:24.297 23:24:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.297 23:24:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:24.297 00:06:24.297 real 0m17.486s 00:06:24.297 user 0m38.689s 00:06:24.297 sys 0m2.423s 00:06:24.297 23:24:03 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.297 23:24:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.297 ************************************ 00:06:24.297 END TEST app_repeat 00:06:24.297 ************************************ 00:06:24.297 23:24:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.297 23:24:03 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.297 23:24:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.297 23:24:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.297 23:24:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.297 ************************************ 00:06:24.297 START TEST cpu_locks 00:06:24.297 ************************************ 00:06:24.297 23:24:04 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.297 * Looking for test storage... 
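The killprocess sequences that recur throughout the run (uname check, `ps --no-headers -o comm=`, the `reactor_0 = sudo` guard, then kill and wait) can be condensed into a sketch like the one below. The pid check, sudo guard, and "killing process with pid" message are taken from the trace; the exact error handling of the real helper is approximated.

```shell
# Terminate a test daemon by pid, following the killprocess pattern in
# the trace: require a pid, confirm the process is alive, refuse to kill
# a privilege helper, then SIGTERM it and reap the child.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    # kill -0 probes for existence without delivering a signal.
    kill -0 "$pid" 2>/dev/null || return 1
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name != sudo ]] || return 1   # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}
```

Note that `wait` only reaps children of the current shell, which holds here because the harness started the daemon itself — matching the `kill 70378` followed by `wait 70378` pairs in the log.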
00:06:24.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:24.297 23:24:04 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.297 23:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.297 23:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.557 23:24:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.557 --rc genhtml_branch_coverage=1 00:06:24.557 --rc genhtml_function_coverage=1 00:06:24.557 --rc genhtml_legend=1 00:06:24.557 --rc geninfo_all_blocks=1 00:06:24.557 --rc geninfo_unexecuted_blocks=1 00:06:24.557 00:06:24.557 ' 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.557 --rc genhtml_branch_coverage=1 00:06:24.557 --rc genhtml_function_coverage=1 00:06:24.557 --rc genhtml_legend=1 00:06:24.557 --rc geninfo_all_blocks=1 00:06:24.557 --rc geninfo_unexecuted_blocks=1 
00:06:24.557 00:06:24.557 ' 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.557 --rc genhtml_branch_coverage=1 00:06:24.557 --rc genhtml_function_coverage=1 00:06:24.557 --rc genhtml_legend=1 00:06:24.557 --rc geninfo_all_blocks=1 00:06:24.557 --rc geninfo_unexecuted_blocks=1 00:06:24.557 00:06:24.557 ' 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.557 --rc genhtml_branch_coverage=1 00:06:24.557 --rc genhtml_function_coverage=1 00:06:24.557 --rc genhtml_legend=1 00:06:24.557 --rc geninfo_all_blocks=1 00:06:24.557 --rc geninfo_unexecuted_blocks=1 00:06:24.557 00:06:24.557 ' 00:06:24.557 23:24:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.557 23:24:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.557 23:24:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.557 23:24:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.557 23:24:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.557 ************************************ 00:06:24.557 START TEST default_locks 00:06:24.557 ************************************ 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70798 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.557 
23:24:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70798 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70798 ']' 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.557 23:24:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.557 [2024-09-30 23:24:04.347863] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
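The `lt 1.15 2` / cmp_versions exchange traced at the start of cpu_locks decides whether the installed lcov is new enough for the branch-coverage flags. A condensed sketch of that comparison: as in the trace, versions are split on `.`, `-`, and `:` into arrays and compared component by component, with missing components treated as 0. This simplification assumes purely numeric components; the real helper routes each field through a decimal() normalizer first.

```shell
# Return 0 (true) when version $1 is strictly lower than version $2.
# Components are split on '.', '-' and ':' as in the traced helper;
# shorter versions are padded with zeros. Numeric components only.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1   # equal versions are not strictly less-than
}
```

With lcov reporting 1.x, `lt 1.15 2` succeeds and the run above exports the `--rc lcov_branch_coverage=1 …` option block seen in the log.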
00:06:24.557 [2024-09-30 23:24:04.348054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70798 ] 00:06:24.817 [2024-09-30 23:24:04.506366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.817 [2024-09-30 23:24:04.552395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.386 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.386 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:25.386 23:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70798 00:06:25.386 23:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70798 00:06:25.386 23:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70798 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70798 ']' 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70798 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70798 00:06:25.956 killing process with pid 70798 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70798' 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70798 00:06:25.956 23:24:05 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70798 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70798 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70798 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70798 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70798 ']' 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.525 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70798) - No such process 00:06:26.525 ERROR: process (pid: 70798) is no longer running 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.525 00:06:26.525 real 0m2.028s 00:06:26.525 user 0m1.969s 00:06:26.525 sys 0m0.640s 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.525 23:24:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.525 ************************************ 00:06:26.525 END TEST default_locks 00:06:26.525 ************************************ 00:06:26.525 23:24:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:26.525 23:24:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:26.525 23:24:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.525 23:24:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.525 ************************************ 00:06:26.525 START TEST default_locks_via_rpc 00:06:26.525 ************************************ 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70851 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70851 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70851 ']' 00:06:26.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.525 23:24:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.784 [2024-09-30 23:24:06.444135] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
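The default_locks test above ends with a negative assertion: `NOT waitforlisten 70798` must fail once the daemon is gone ("No such process" / "is no longer running"), and the harness converts that expected failure into `es=1` and then overall success. A minimal sketch of that NOT/es inversion; the valid_exec_arg type-check visible in the trace is omitted here.

```shell
# Run a command that is *expected* to fail and invert the result, as in
# the NOT/es=... pattern traced above: the wrapped command failing makes
# NOT succeed, and the command unexpectedly succeeding makes NOT fail.
NOT() {
    local es=0
    "$@" || es=$?
    # The trace also clamps signal-death statuses (> 128) down to 1.
    if (( es > 128 )); then es=1; fi
    (( es != 0 ))   # exit 0 only if the wrapped command failed
}
```

Usage mirrors the log: `NOT waitforlisten 70798` passes exactly when the pid can no longer be waited on.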
00:06:26.784 [2024-09-30 23:24:06.444318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70851 ] 00:06:26.784 [2024-09-30 23:24:06.605260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.044 [2024-09-30 23:24:06.673336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.612 23:24:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70851 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.612 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70851 00:06:27.872 23:24:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70851 00:06:27.872 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70851 ']' 00:06:27.872 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70851 00:06:27.872 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:27.872 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.872 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70851 00:06:27.872 killing process with pid 70851 00:06:27.872 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.873 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.873 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70851' 00:06:27.873 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70851 00:06:27.873 23:24:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70851 00:06:28.442 ************************************ 00:06:28.442 END TEST default_locks_via_rpc 00:06:28.442 ************************************ 00:06:28.442 00:06:28.442 real 0m1.821s 00:06:28.442 user 0m1.609s 00:06:28.442 sys 0m0.690s 00:06:28.442 
23:24:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.442 23:24:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.442 23:24:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.442 23:24:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.442 23:24:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.442 23:24:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.442 ************************************ 00:06:28.442 START TEST non_locking_app_on_locked_coremask 00:06:28.442 ************************************ 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70903 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70903 /var/tmp/spdk.sock 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70903 ']' 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:28.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.442 23:24:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.701 [2024-09-30 23:24:08.344464] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:28.701 [2024-09-30 23:24:08.344613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70903 ] 00:06:28.701 [2024-09-30 23:24:08.504395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.701 [2024-09-30 23:24:08.550791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70919 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70919 /var/tmp/spdk2.sock 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70919 ']' 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.663 23:24:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.663 23:24:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.663 [2024-09-30 23:24:09.246596] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:29.663 [2024-09-30 23:24:09.246869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70919 ] 00:06:29.663 [2024-09-30 23:24:09.400547] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
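The "CPU core locks deactivated" notice and the `lslocks -p PID | grep -q spdk_cpu_lock` checks above revolve around per-core advisory file locks: the first spdk_tgt locks a /var/tmp/spdk_cpu_lock_* file per claimed core, so a second instance on the same core must either be started with --disable-cpumask-locks (as here) or fail. The sketch below demonstrates only the underlying mechanism with flock(1) on a single temp file; the path and single-file setup are illustrative, not SPDK's actual per-core files.

```shell
# Demonstrate the core-lock idea behind the spdk_cpu_lock files: a
# holder takes an exclusive advisory lock, so a second non-blocking
# attempt fails until the holder releases it.
lockfile=$(mktemp /tmp/spdk_cpu_lock_demo.XXXXXX)

# Hold the lock in a background subshell for a couple of seconds.
(
    flock -x 9
    sleep 2
) 9>"$lockfile" &
holder=$!
sleep 0.3   # give the holder time to acquire the lock

# A second exclusive non-blocking attempt must fail while it is held...
if flock -xn "$lockfile" -c true; then second_try=0; else second_try=1; fi

wait "$holder"
# ...and succeed once the holder has exited and released it.
if flock -xn "$lockfile" -c true; then after=0; else after=1; fi
rm -f "$lockfile"
```

This is why the locks_exist helper in the trace can verify the lock from outside the process: advisory flock state shows up in lslocks for the holder's pid.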
00:06:29.663 [2024-09-30 23:24:09.400607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.663 [2024-09-30 23:24:09.488037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.235 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.235 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:30.235 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70903 00:06:30.235 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70903 00:06:30.235 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70903 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70903 ']' 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70903 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70903 00:06:31.171 killing process with pid 70903 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70903' 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70903 00:06:31.171 23:24:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70903 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70919 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70919 ']' 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70919 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70919 00:06:32.110 killing process with pid 70919 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70919' 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70919 00:06:32.110 23:24:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70919 00:06:32.369 ************************************ 00:06:32.369 END TEST non_locking_app_on_locked_coremask 00:06:32.369 ************************************ 00:06:32.369 00:06:32.369 real 0m3.816s 00:06:32.369 
user 0m3.953s 00:06:32.369 sys 0m1.233s 00:06:32.369 23:24:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.369 23:24:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.369 23:24:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.369 23:24:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.369 23:24:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.369 23:24:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.369 ************************************ 00:06:32.369 START TEST locking_app_on_unlocked_coremask 00:06:32.369 ************************************ 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70990 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70990 /var/tmp/spdk.sock 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70990 ']' 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.369 23:24:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.629 [2024-09-30 23:24:12.228402] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:32.629 [2024-09-30 23:24:12.228997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70990 ] 00:06:32.629 [2024-09-30 23:24:12.385975] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.629 [2024-09-30 23:24:12.386145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.629 [2024-09-30 23:24:12.429147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71006 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71006 /var/tmp/spdk2.sock 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71006 ']' 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.197 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.455 [2024-09-30 23:24:13.095499] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:33.455 [2024-09-30 23:24:13.095697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71006 ] 00:06:33.455 [2024-09-30 23:24:13.245990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.713 [2024-09-30 23:24:13.330439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.314 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.314 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:34.314 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71006 00:06:34.314 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71006 00:06:34.314 23:24:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.572 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70990 00:06:34.572 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70990 ']' 00:06:34.572 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70990 00:06:34.572 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:34.572 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.572 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70990 00:06:34.830 killing process with pid 70990 00:06:34.830 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.830 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.830 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70990' 00:06:34.830 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70990 00:06:34.830 23:24:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70990 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71006 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71006 ']' 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71006 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71006 00:06:35.396 killing process with pid 71006 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71006' 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71006 00:06:35.396 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 71006 00:06:35.965 00:06:35.965 real 0m3.469s 00:06:35.965 user 0m3.594s 00:06:35.965 sys 0m1.068s 00:06:35.965 ************************************ 00:06:35.965 END TEST locking_app_on_unlocked_coremask 00:06:35.965 ************************************ 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.965 23:24:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.965 23:24:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.965 23:24:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.965 23:24:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.965 ************************************ 00:06:35.965 START TEST locking_app_on_locked_coremask 00:06:35.965 ************************************ 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71064 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71064 /var/tmp/spdk.sock 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71064 ']' 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.965 23:24:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.965 [2024-09-30 23:24:15.761460] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:35.965 [2024-09-30 23:24:15.761670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:06:36.226 [2024-09-30 23:24:15.922808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.226 [2024-09-30 23:24:15.969374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71080 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71080 /var/tmp/spdk2.sock 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71080 /var/tmp/spdk2.sock 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71080 /var/tmp/spdk2.sock 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71080 ']' 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.796 23:24:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.055 [2024-09-30 23:24:16.651904] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:37.055 [2024-09-30 23:24:16.652439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71080 ] 00:06:37.055 [2024-09-30 23:24:16.802286] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71064 has claimed it. 00:06:37.055 [2024-09-30 23:24:16.802354] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.622 ERROR: process (pid: 71080) is no longer running 00:06:37.622 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71080) - No such process 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71064 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71064 00:06:37.622 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71064 00:06:37.882 23:24:17 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71064 ']' 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71064 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71064 00:06:37.882 killing process with pid 71064 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71064' 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71064 00:06:37.882 23:24:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71064 00:06:38.450 ************************************ 00:06:38.450 END TEST locking_app_on_locked_coremask 00:06:38.450 ************************************ 00:06:38.450 00:06:38.450 real 0m2.423s 00:06:38.450 user 0m2.568s 00:06:38.450 sys 0m0.749s 00:06:38.450 23:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.450 23:24:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.450 23:24:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:38.450 23:24:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:06:38.450 23:24:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.450 23:24:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.450 ************************************ 00:06:38.450 START TEST locking_overlapped_coremask 00:06:38.450 ************************************ 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71133 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71133 /var/tmp/spdk.sock 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71133 ']' 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.450 23:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.451 23:24:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.451 [2024-09-30 23:24:18.251010] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:38.451 [2024-09-30 23:24:18.251254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71133 ] 00:06:38.710 [2024-09-30 23:24:18.411811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.710 [2024-09-30 23:24:18.461660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.710 [2024-09-30 23:24:18.461767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.710 [2024-09-30 23:24:18.461908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71140 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71140 /var/tmp/spdk2.sock 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71140 /var/tmp/spdk2.sock 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:39.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71140 /var/tmp/spdk2.sock 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71140 ']' 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.279 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.538 [2024-09-30 23:24:19.142968] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:39.538 [2024-09-30 23:24:19.143139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71140 ] 00:06:39.538 [2024-09-30 23:24:19.294751] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71133 has claimed it. 00:06:39.538 [2024-09-30 23:24:19.294832] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:40.106 ERROR: process (pid: 71140) is no longer running 00:06:40.106 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71140) - No such process 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71133 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71133 ']' 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71133 00:06:40.106 23:24:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71133 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71133' 00:06:40.106 killing process with pid 71133 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71133 00:06:40.106 23:24:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71133 00:06:40.673 00:06:40.673 real 0m2.062s 00:06:40.673 user 0m5.434s 00:06:40.673 sys 0m0.511s 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 ************************************ 00:06:40.673 END TEST locking_overlapped_coremask 00:06:40.673 ************************************ 00:06:40.673 23:24:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:40.673 23:24:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.673 23:24:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.673 23:24:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 ************************************ 00:06:40.673 START TEST 
locking_overlapped_coremask_via_rpc 00:06:40.673 ************************************ 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71193 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71193 /var/tmp/spdk.sock 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71193 ']' 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.673 23:24:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.673 [2024-09-30 23:24:20.383838] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:40.673 [2024-09-30 23:24:20.384044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71193 ] 00:06:40.932 [2024-09-30 23:24:20.544671] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:40.932 [2024-09-30 23:24:20.544822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.932 [2024-09-30 23:24:20.591836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.932 [2024-09-30 23:24:20.592023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.932 [2024-09-30 23:24:20.592104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71210 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71210 /var/tmp/spdk2.sock 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71210 ']' 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.501 23:24:21 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.501 23:24:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.501 [2024-09-30 23:24:21.295346] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:41.501 [2024-09-30 23:24:21.295561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71210 ] 00:06:41.760 [2024-09-30 23:24:21.451337] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.760 [2024-09-30 23:24:21.451414] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.760 [2024-09-30 23:24:21.609396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.760 [2024-09-30 23:24:21.609493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.761 [2024-09-30 23:24:21.609619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.697 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.698 23:24:22 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.698 [2024-09-30 23:24:22.305071] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71193 has claimed it. 00:06:42.698 request: 00:06:42.698 { 00:06:42.698 "method": "framework_enable_cpumask_locks", 00:06:42.698 "req_id": 1 00:06:42.698 } 00:06:42.698 Got JSON-RPC error response 00:06:42.698 response: 00:06:42.698 { 00:06:42.698 "code": -32603, 00:06:42.698 "message": "Failed to claim CPU core: 2" 00:06:42.698 } 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71193 /var/tmp/spdk.sock 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71193 ']' 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71210 /var/tmp/spdk2.sock 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71210 ']' 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.698 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.958 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.958 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:42.958 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.958 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.958 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.958 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.958 00:06:42.958 real 0m2.433s 00:06:42.958 user 0m1.030s 00:06:42.958 sys 0m0.172s 00:06:42.958 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.958 23:24:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.958 ************************************ 00:06:42.958 END TEST locking_overlapped_coremask_via_rpc 00:06:42.958 ************************************ 00:06:42.958 23:24:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:42.958 23:24:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71193 ]] 00:06:42.958 23:24:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71193 00:06:42.958 23:24:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71193 ']' 00:06:42.958 23:24:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71193 00:06:42.958 23:24:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:42.958 23:24:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.958 23:24:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71193 00:06:43.217 23:24:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.217 23:24:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.217 killing process with pid 71193 00:06:43.217 23:24:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71193' 00:06:43.217 23:24:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71193 00:06:43.217 23:24:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71193 00:06:43.476 23:24:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71210 ]] 00:06:43.476 23:24:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71210 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71210 ']' 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71210 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71210 00:06:43.476 killing process with pid 71210 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71210' 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71210 00:06:43.476 23:24:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71210 00:06:44.417 23:24:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.417 Process with pid 71193 is not found 00:06:44.417 Process with pid 71210 is not found 00:06:44.417 23:24:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:44.417 23:24:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71193 ]] 00:06:44.417 23:24:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71193 00:06:44.417 23:24:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71193 ']' 00:06:44.417 23:24:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71193 00:06:44.417 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71193) - No such process 00:06:44.417 23:24:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71193 is not found' 00:06:44.417 23:24:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71210 ]] 00:06:44.417 23:24:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71210 00:06:44.417 23:24:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71210 ']' 00:06:44.417 23:24:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71210 00:06:44.417 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71210) - No such process 00:06:44.417 23:24:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71210 is not found' 00:06:44.417 23:24:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.417 00:06:44.417 real 0m19.927s 00:06:44.417 user 0m32.684s 00:06:44.417 sys 0m6.366s 00:06:44.417 23:24:23 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.417 ************************************ 00:06:44.417 END TEST cpu_locks 00:06:44.417 
************************************ 00:06:44.417 23:24:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.417 ************************************ 00:06:44.417 END TEST event 00:06:44.417 ************************************ 00:06:44.417 00:06:44.417 real 0m47.332s 00:06:44.417 user 1m28.221s 00:06:44.417 sys 0m10.003s 00:06:44.417 23:24:23 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.417 23:24:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.417 23:24:24 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.417 23:24:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.417 23:24:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.417 23:24:24 -- common/autotest_common.sh@10 -- # set +x 00:06:44.417 ************************************ 00:06:44.417 START TEST thread 00:06:44.417 ************************************ 00:06:44.417 23:24:24 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.417 * Looking for test storage... 
00:06:44.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:44.417 23:24:24 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:44.417 23:24:24 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:44.417 23:24:24 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:44.676 23:24:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.676 23:24:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.676 23:24:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.676 23:24:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.676 23:24:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.676 23:24:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.676 23:24:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.676 23:24:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.676 23:24:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.676 23:24:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.676 23:24:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.676 23:24:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:44.676 23:24:24 thread -- scripts/common.sh@345 -- # : 1 00:06:44.676 23:24:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.676 23:24:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.676 23:24:24 thread -- scripts/common.sh@365 -- # decimal 1 00:06:44.676 23:24:24 thread -- scripts/common.sh@353 -- # local d=1 00:06:44.676 23:24:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.676 23:24:24 thread -- scripts/common.sh@355 -- # echo 1 00:06:44.676 23:24:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.676 23:24:24 thread -- scripts/common.sh@366 -- # decimal 2 00:06:44.676 23:24:24 thread -- scripts/common.sh@353 -- # local d=2 00:06:44.676 23:24:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.676 23:24:24 thread -- scripts/common.sh@355 -- # echo 2 00:06:44.676 23:24:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.676 23:24:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.676 23:24:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.676 23:24:24 thread -- scripts/common.sh@368 -- # return 0 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.676 --rc genhtml_branch_coverage=1 00:06:44.676 --rc genhtml_function_coverage=1 00:06:44.676 --rc genhtml_legend=1 00:06:44.676 --rc geninfo_all_blocks=1 00:06:44.676 --rc geninfo_unexecuted_blocks=1 00:06:44.676 00:06:44.676 ' 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.676 --rc genhtml_branch_coverage=1 00:06:44.676 --rc genhtml_function_coverage=1 00:06:44.676 --rc genhtml_legend=1 00:06:44.676 --rc geninfo_all_blocks=1 00:06:44.676 --rc geninfo_unexecuted_blocks=1 00:06:44.676 00:06:44.676 ' 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:44.676 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.676 --rc genhtml_branch_coverage=1 00:06:44.676 --rc genhtml_function_coverage=1 00:06:44.676 --rc genhtml_legend=1 00:06:44.676 --rc geninfo_all_blocks=1 00:06:44.676 --rc geninfo_unexecuted_blocks=1 00:06:44.676 00:06:44.676 ' 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:44.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.676 --rc genhtml_branch_coverage=1 00:06:44.676 --rc genhtml_function_coverage=1 00:06:44.676 --rc genhtml_legend=1 00:06:44.676 --rc geninfo_all_blocks=1 00:06:44.676 --rc geninfo_unexecuted_blocks=1 00:06:44.676 00:06:44.676 ' 00:06:44.676 23:24:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.676 23:24:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.676 ************************************ 00:06:44.676 START TEST thread_poller_perf 00:06:44.676 ************************************ 00:06:44.676 23:24:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.676 [2024-09-30 23:24:24.350313] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:06:44.676 [2024-09-30 23:24:24.350440] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71346 ] 00:06:44.676 [2024-09-30 23:24:24.513165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.935 [2024-09-30 23:24:24.557027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.935 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:45.928 ====================================== 00:06:45.928 busy:2297862174 (cyc) 00:06:45.928 total_run_count: 416000 00:06:45.928 tsc_hz: 2290000000 (cyc) 00:06:45.928 ====================================== 00:06:45.928 poller_cost: 5523 (cyc), 2411 (nsec) 00:06:45.928 00:06:45.928 real 0m1.352s 00:06:45.928 user 0m1.146s 00:06:45.928 sys 0m0.100s 00:06:45.928 23:24:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.928 ************************************ 00:06:45.928 END TEST thread_poller_perf 00:06:45.928 ************************************ 00:06:45.928 23:24:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.928 23:24:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.928 23:24:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:45.928 23:24:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.928 23:24:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.928 ************************************ 00:06:45.928 START TEST thread_poller_perf 00:06:45.928 ************************************ 00:06:45.928 23:24:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:06:45.928 [2024-09-30 23:24:25.775699] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:45.928 [2024-09-30 23:24:25.775848] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71382 ] 00:06:46.188 [2024-09-30 23:24:25.936245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.188 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:46.188 [2024-09-30 23:24:25.983595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.566 ====================================== 00:06:47.566 busy:2293252784 (cyc) 00:06:47.566 total_run_count: 5610000 00:06:47.566 tsc_hz: 2290000000 (cyc) 00:06:47.566 ====================================== 00:06:47.566 poller_cost: 408 (cyc), 178 (nsec) 00:06:47.566 00:06:47.566 real 0m1.349s 00:06:47.566 user 0m1.142s 00:06:47.566 sys 0m0.101s 00:06:47.566 23:24:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.566 23:24:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 ************************************ 00:06:47.566 END TEST thread_poller_perf 00:06:47.566 ************************************ 00:06:47.566 23:24:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.566 ************************************ 00:06:47.566 END TEST thread 00:06:47.566 ************************************ 00:06:47.566 00:06:47.566 real 0m3.071s 00:06:47.566 user 0m2.456s 00:06:47.566 sys 0m0.414s 00:06:47.566 23:24:27 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.566 23:24:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 23:24:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:47.566 23:24:27 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.566 23:24:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.566 23:24:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.566 23:24:27 -- common/autotest_common.sh@10 -- # set +x 00:06:47.566 ************************************ 00:06:47.566 START TEST app_cmdline 00:06:47.566 ************************************ 00:06:47.566 23:24:27 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.566 * Looking for test storage... 00:06:47.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:47.566 23:24:27 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.566 23:24:27 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.566 23:24:27 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.566 23:24:27 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:47.566 23:24:27 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.566 23:24:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.826 23:24:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.826 --rc genhtml_branch_coverage=1 00:06:47.826 --rc genhtml_function_coverage=1 00:06:47.826 --rc genhtml_legend=1 00:06:47.826 --rc geninfo_all_blocks=1 00:06:47.826 --rc geninfo_unexecuted_blocks=1 00:06:47.826 00:06:47.826 ' 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.826 --rc genhtml_branch_coverage=1 00:06:47.826 --rc 
genhtml_function_coverage=1 00:06:47.826 --rc genhtml_legend=1 00:06:47.826 --rc geninfo_all_blocks=1 00:06:47.826 --rc geninfo_unexecuted_blocks=1 00:06:47.826 00:06:47.826 ' 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.826 --rc genhtml_branch_coverage=1 00:06:47.826 --rc genhtml_function_coverage=1 00:06:47.826 --rc genhtml_legend=1 00:06:47.826 --rc geninfo_all_blocks=1 00:06:47.826 --rc geninfo_unexecuted_blocks=1 00:06:47.826 00:06:47.826 ' 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.826 --rc genhtml_branch_coverage=1 00:06:47.826 --rc genhtml_function_coverage=1 00:06:47.826 --rc genhtml_legend=1 00:06:47.826 --rc geninfo_all_blocks=1 00:06:47.826 --rc geninfo_unexecuted_blocks=1 00:06:47.826 00:06:47.826 ' 00:06:47.826 23:24:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:47.826 23:24:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71466 00:06:47.826 23:24:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:47.826 23:24:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71466 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71466 ']' 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.826 23:24:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:47.826 [2024-09-30 23:24:27.522877] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:47.826 [2024-09-30 23:24:27.523095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71466 ] 00:06:48.086 [2024-09-30 23:24:27.680196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.086 [2024-09-30 23:24:27.727942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.654 23:24:28 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.654 23:24:28 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:48.654 23:24:28 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:48.654 { 00:06:48.654 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:48.654 "fields": { 00:06:48.654 "major": 25, 00:06:48.654 "minor": 1, 00:06:48.654 "patch": 0, 00:06:48.654 "suffix": "-pre", 00:06:48.654 "commit": "09cc66129" 00:06:48.654 } 00:06:48.654 } 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.913 23:24:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:48.913 23:24:28 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.913 request: 00:06:48.913 { 00:06:48.913 "method": "env_dpdk_get_mem_stats", 
00:06:48.913 "req_id": 1 00:06:48.913 } 00:06:48.913 Got JSON-RPC error response 00:06:48.913 response: 00:06:48.913 { 00:06:48.913 "code": -32601, 00:06:48.913 "message": "Method not found" 00:06:48.913 } 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.172 23:24:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71466 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71466 ']' 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71466 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71466 00:06:49.172 killing process with pid 71466 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71466' 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@969 -- # kill 71466 00:06:49.172 23:24:28 app_cmdline -- common/autotest_common.sh@974 -- # wait 71466 00:06:49.431 ************************************ 00:06:49.431 END TEST app_cmdline 00:06:49.431 ************************************ 00:06:49.431 00:06:49.431 real 0m1.996s 00:06:49.431 user 0m2.192s 00:06:49.431 sys 0m0.565s 00:06:49.431 23:24:29 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.431 23:24:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.431 23:24:29 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:06:49.431 23:24:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:49.431 23:24:29 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:49.431 23:24:29 -- common/autotest_common.sh@10 -- # set +x
00:06:49.431 ************************************
00:06:49.431 START TEST version
00:06:49.431 ************************************
00:06:49.431 23:24:29 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:06:49.691 * Looking for test storage...
00:06:49.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1681 -- # lcov --version
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:49.691 23:24:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:49.691 23:24:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:49.691 23:24:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:49.691 23:24:29 version -- scripts/common.sh@336 -- # IFS=.-:
00:06:49.691 23:24:29 version -- scripts/common.sh@336 -- # read -ra ver1
00:06:49.691 23:24:29 version -- scripts/common.sh@337 -- # IFS=.-:
00:06:49.691 23:24:29 version -- scripts/common.sh@337 -- # read -ra ver2
00:06:49.691 23:24:29 version -- scripts/common.sh@338 -- # local 'op=<'
00:06:49.691 23:24:29 version -- scripts/common.sh@340 -- # ver1_l=2
00:06:49.691 23:24:29 version -- scripts/common.sh@341 -- # ver2_l=1
00:06:49.691 23:24:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:49.691 23:24:29 version -- scripts/common.sh@344 -- # case "$op" in
00:06:49.691 23:24:29 version -- scripts/common.sh@345 -- # : 1
00:06:49.691 23:24:29 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:49.691 23:24:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:49.691 23:24:29 version -- scripts/common.sh@365 -- # decimal 1
00:06:49.691 23:24:29 version -- scripts/common.sh@353 -- # local d=1
00:06:49.691 23:24:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:49.691 23:24:29 version -- scripts/common.sh@355 -- # echo 1
00:06:49.691 23:24:29 version -- scripts/common.sh@365 -- # ver1[v]=1
00:06:49.691 23:24:29 version -- scripts/common.sh@366 -- # decimal 2
00:06:49.691 23:24:29 version -- scripts/common.sh@353 -- # local d=2
00:06:49.691 23:24:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:49.691 23:24:29 version -- scripts/common.sh@355 -- # echo 2
00:06:49.691 23:24:29 version -- scripts/common.sh@366 -- # ver2[v]=2
00:06:49.691 23:24:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:49.691 23:24:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:49.691 23:24:29 version -- scripts/common.sh@368 -- # return 0
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:49.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.691 --rc genhtml_branch_coverage=1
00:06:49.691 --rc genhtml_function_coverage=1
00:06:49.691 --rc genhtml_legend=1
00:06:49.691 --rc geninfo_all_blocks=1
00:06:49.691 --rc geninfo_unexecuted_blocks=1
00:06:49.691
00:06:49.691 '
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:49.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.691 --rc genhtml_branch_coverage=1
00:06:49.691 --rc genhtml_function_coverage=1
00:06:49.691 --rc genhtml_legend=1
00:06:49.691 --rc geninfo_all_blocks=1
00:06:49.691 --rc geninfo_unexecuted_blocks=1
00:06:49.691
00:06:49.691 '
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:49.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.691 --rc genhtml_branch_coverage=1
00:06:49.691 --rc genhtml_function_coverage=1
00:06:49.691 --rc genhtml_legend=1
00:06:49.691 --rc geninfo_all_blocks=1
00:06:49.691 --rc geninfo_unexecuted_blocks=1
00:06:49.691
00:06:49.691 '
00:06:49.691 23:24:29 version -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:49.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.691 --rc genhtml_branch_coverage=1
00:06:49.691 --rc genhtml_function_coverage=1
00:06:49.691 --rc genhtml_legend=1
00:06:49.691 --rc geninfo_all_blocks=1
00:06:49.691 --rc geninfo_unexecuted_blocks=1
00:06:49.691
00:06:49.691 '
00:06:49.691 23:24:29 version -- app/version.sh@17 -- # get_header_version major
00:06:49.691 23:24:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:49.691 23:24:29 version -- app/version.sh@14 -- # cut -f2
00:06:49.691 23:24:29 version -- app/version.sh@14 -- # tr -d '"'
00:06:49.691 23:24:29 version -- app/version.sh@17 -- # major=25
00:06:49.691 23:24:29 version -- app/version.sh@18 -- # get_header_version minor
00:06:49.691 23:24:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:49.691 23:24:29 version -- app/version.sh@14 -- # cut -f2
00:06:49.691 23:24:29 version -- app/version.sh@14 -- # tr -d '"'
00:06:49.691 23:24:29 version -- app/version.sh@18 -- # minor=1
00:06:49.691 23:24:29 version -- app/version.sh@19 -- # get_header_version patch
00:06:49.691 23:24:29 version -- app/version.sh@14 -- # cut -f2
00:06:49.691 23:24:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:49.691 23:24:29 version -- app/version.sh@14 -- # tr -d '"'
00:06:49.691 23:24:29 version -- app/version.sh@19 -- # patch=0
00:06:49.691 23:24:29 version -- app/version.sh@20 -- # get_header_version suffix
00:06:49.691 23:24:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:06:49.691 23:24:29 version -- app/version.sh@14 -- # cut -f2
00:06:49.691 23:24:29 version -- app/version.sh@14 -- # tr -d '"'
00:06:49.691 23:24:29 version -- app/version.sh@20 -- # suffix=-pre
00:06:49.691 23:24:29 version -- app/version.sh@22 -- # version=25.1
00:06:49.691 23:24:29 version -- app/version.sh@25 -- # (( patch != 0 ))
00:06:49.691 23:24:29 version -- app/version.sh@28 -- # version=25.1rc0
00:06:49.691 23:24:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:49.691 23:24:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:06:49.950 23:24:29 version -- app/version.sh@30 -- # py_version=25.1rc0
00:06:49.950 23:24:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:06:49.950 ************************************
00:06:49.950 END TEST version
00:06:49.950 ************************************
00:06:49.950
00:06:49.950 real 0m0.326s
00:06:49.950 user 0m0.191s
00:06:49.950 sys 0m0.190s
00:06:49.950 23:24:29 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:49.950 23:24:29 version -- common/autotest_common.sh@10 -- # set +x
00:06:49.950 23:24:29 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:06:49.950 23:24:29 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]]
00:06:49.950 23:24:29 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:06:49.950 23:24:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:49.950 23:24:29 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:49.950 23:24:29 -- common/autotest_common.sh@10 -- # set +x
00:06:49.950 ************************************
00:06:49.950 START TEST bdev_raid
00:06:49.950 ************************************
00:06:49.950 23:24:29 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:06:49.950 * Looking for test storage...
00:06:49.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:06:49.950 23:24:29 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:06:49.950 23:24:29 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version
00:06:49.950 23:24:29 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@336 -- # IFS=.-:
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@337 -- # IFS=.-:
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@338 -- # local 'op=<'
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@344 -- # case "$op" in
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@345 -- # : 1
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@365 -- # decimal 1
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@353 -- # local d=1
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@355 -- # echo 1
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@366 -- # decimal 2
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@353 -- # local d=2
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@355 -- # echo 2
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:50.209 23:24:29 bdev_raid -- scripts/common.sh@368 -- # return 0
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:06:50.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.209 --rc genhtml_branch_coverage=1
00:06:50.209 --rc genhtml_function_coverage=1
00:06:50.209 --rc genhtml_legend=1
00:06:50.209 --rc geninfo_all_blocks=1
00:06:50.209 --rc geninfo_unexecuted_blocks=1
00:06:50.209
00:06:50.209 '
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:06:50.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.209 --rc genhtml_branch_coverage=1
00:06:50.209 --rc genhtml_function_coverage=1
00:06:50.209 --rc genhtml_legend=1
00:06:50.209 --rc geninfo_all_blocks=1
00:06:50.209 --rc geninfo_unexecuted_blocks=1
00:06:50.209
00:06:50.209 '
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:06:50.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.209 --rc genhtml_branch_coverage=1
00:06:50.209 --rc genhtml_function_coverage=1
00:06:50.209 --rc genhtml_legend=1
00:06:50.209 --rc geninfo_all_blocks=1
00:06:50.209 --rc geninfo_unexecuted_blocks=1
00:06:50.209
00:06:50.209 '
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:06:50.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:50.209 --rc genhtml_branch_coverage=1
00:06:50.209 --rc genhtml_function_coverage=1
00:06:50.209 --rc genhtml_legend=1
00:06:50.209 --rc geninfo_all_blocks=1
00:06:50.209 --rc geninfo_unexecuted_blocks=1
00:06:50.209
00:06:50.209 '
00:06:50.209 23:24:29 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:50.209 23:24:29 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e
00:06:50.209 23:24:29 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd
00:06:50.209 23:24:29 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest
00:06:50.209 23:24:29 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT
00:06:50.209 23:24:29 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512
00:06:50.209 23:24:29 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:50.209 23:24:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:50.209 ************************************
00:06:50.209 START TEST raid1_resize_data_offset_test
00:06:50.209 ************************************
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71631
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:50.209 Process raid pid: 71631
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71631'
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71631
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71631 ']'
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:50.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:50.209 23:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:50.209 [2024-09-30 23:24:29.979910] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:06:50.209 [2024-09-30 23:24:29.980053] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:50.468 [2024-09-30 23:24:30.142568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.468 [2024-09-30 23:24:30.185515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.468 [2024-09-30 23:24:30.225995] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:50.468 [2024-09-30 23:24:30.226031] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.036 malloc0
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.036 malloc1
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.036 null0
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.036 [2024-09-30 23:24:30.879195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:06:51.036 [2024-09-30 23:24:30.880999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:06:51.036 [2024-09-30 23:24:30.881041] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:06:51.036 [2024-09-30 23:24:30.881170] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:06:51.036 [2024-09-30 23:24:30.881186] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:06:51.036 [2024-09-30 23:24:30.881421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:06:51.036 [2024-09-30 23:24:30.881564] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:06:51.036 [2024-09-30 23:24:30.881587] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:06:51.036 [2024-09-30 23:24:30.881725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:51.036 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.296 [2024-09-30 23:24:30.939109] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.296 23:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.296 malloc2
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.296 [2024-09-30 23:24:31.060319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:51.296 [2024-09-30 23:24:31.064731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.296 [2024-09-30 23:24:31.066615] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:51.296 23:24:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71631
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71631 ']'
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71631
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71631
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:51.297 killing process with pid 71631
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71631'
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71631
00:06:51.297 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71631
00:06:51.297 [2024-09-30 23:24:31.148131] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:51.556 [2024-09-30 23:24:31.149727] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:51.556 [2024-09-30 23:24:31.149793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:51.556 [2024-09-30 23:24:31.149826] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:51.556 [2024-09-30 23:24:31.155344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:51.556 [2024-09-30 23:24:31.155612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:51.556 [2024-09-30 23:24:31.155633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:06:51.556 [2024-09-30 23:24:31.366510] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:51.816 23:24:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:51.816
00:06:51.816 real 0m1.736s
00:06:51.816 user 0m1.729s
00:06:51.816 sys 0m0.444s
00:06:51.816 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:51.816 23:24:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:51.816 ************************************
00:06:51.816 END TEST raid1_resize_data_offset_test
00:06:51.816 ************************************
00:06:52.075 23:24:31 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:52.075 23:24:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:52.075 23:24:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:52.075 23:24:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:52.075 ************************************
00:06:52.075 START TEST raid0_resize_superblock_test
00:06:52.075 ************************************
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71684
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:52.075 Process raid pid: 71684
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71684'
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71684
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71684 ']'
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:52.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:52.075 23:24:31 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.075 [2024-09-30 23:24:31.775360] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:06:52.075 [2024-09-30 23:24:31.775799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:52.335 [2024-09-30 23:24:31.935223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.335 [2024-09-30 23:24:32.001940] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.335 [2024-09-30 23:24:32.077505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:52.335 [2024-09-30 23:24:32.077549] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:52.904 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:52.904 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:06:52.904 23:24:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:52.904 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:52.904 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.164 malloc0
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.164 [2024-09-30 23:24:32.808483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:53.164 [2024-09-30 23:24:32.808581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:53.164 [2024-09-30 23:24:32.808620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:53.164 [2024-09-30 23:24:32.808637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:53.164 [2024-09-30 23:24:32.811243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:53.164 [2024-09-30 23:24:32.811287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:53.164 pt0
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.164 88d73b58-8c01-48fa-a6a3-df2454cbc627
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.164 21377ef5-d37f-4ab1-ae45-3c4211159d64
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.164 23:24:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.164 585abc03-6c37-4bc4-8f88-d6f6c5434941
00:06:53.164 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.164 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:53.164 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:53.164 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.164 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.423 [2024-09-30 23:24:33.016249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 21377ef5-d37f-4ab1-ae45-3c4211159d64 is claimed
00:06:53.423 [2024-09-30 23:24:33.016351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 585abc03-6c37-4bc4-8f88-d6f6c5434941 is claimed
00:06:53.423 [2024-09-30 23:24:33.016459] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:06:53.423 [2024-09-30 23:24:33.016473] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:06:53.423 [2024-09-30 23:24:33.016759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:06:53.423 [2024-09-30 23:24:33.016959] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:06:53.423 [2024-09-30 23:24:33.016976] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:06:53.423 [2024-09-30 23:24:33.017126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:53.423 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:53.423 [2024-09-30 23:24:33.104318] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.424 [2024-09-30 23:24:33.152131] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:53.424 [2024-09-30 23:24:33.152159] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '21377ef5-d37f-4ab1-ae45-3c4211159d64' was resized: old size 131072, new size 204800
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.424 [2024-09-30 23:24:33.160046] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:53.424 [2024-09-30 23:24:33.160071] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '585abc03-6c37-4bc4-8f88-d6f6c5434941' was resized: old size 131072, new size 204800
00:06:53.424 [2024-09-30 23:24:33.160094] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:53.424 23:24:33
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:53.424 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:53.424 [2024-09-30 23:24:33.268044] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.684 [2024-09-30 23:24:33.319786] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:06:53.684 [2024-09-30 23:24:33.319911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:53.684 [2024-09-30 23:24:33.319927] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:53.684 [2024-09-30 23:24:33.319944] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:53.684 [2024-09-30 23:24:33.320086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.684 [2024-09-30 23:24:33.320121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.684 [2024-09-30 23:24:33.320133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.684 [2024-09-30 23:24:33.327655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:53.684 [2024-09-30 23:24:33.327738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.684 [2024-09-30 23:24:33.327761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:53.684 [2024-09-30 23:24:33.327782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.684 [2024-09-30 23:24:33.330286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.684 [2024-09-30 23:24:33.330325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:53.684 [2024-09-30 23:24:33.331896] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 21377ef5-d37f-4ab1-ae45-3c4211159d64 00:06:53.684 [2024-09-30 23:24:33.331971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 21377ef5-d37f-4ab1-ae45-3c4211159d64 is claimed 00:06:53.684 [2024-09-30 23:24:33.332065] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 585abc03-6c37-4bc4-8f88-d6f6c5434941 00:06:53.684 [2024-09-30 23:24:33.332096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 585abc03-6c37-4bc4-8f88-d6f6c5434941 is claimed 00:06:53.684 [2024-09-30 23:24:33.332215] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 585abc03-6c37-4bc4-8f88-d6f6c5434941 (2) smaller than existing raid bdev Raid (3) 00:06:53.684 [2024-09-30 23:24:33.332244] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 21377ef5-d37f-4ab1-ae45-3c4211159d64: File exists 00:06:53.684 [2024-09-30 23:24:33.332278] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:53.684 [2024-09-30 23:24:33.332288] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:53.684 pt0 00:06:53.684 [2024-09-30 23:24:33.332533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:53.684 [2024-09-30 23:24:33.332665] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:53.684 [2024-09-30 23:24:33.332677] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:53.684 [2024-09-30 23:24:33.332795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.684 [2024-09-30 23:24:33.348171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71684 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71684 ']' 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71684 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71684 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.684 killing process with pid 71684 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71684' 00:06:53.684 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71684 00:06:53.684 [2024-09-30 23:24:33.433123] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.684 [2024-09-30 23:24:33.433203] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.685 [2024-09-30 23:24:33.433248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.685 [2024-09-30 23:24:33.433258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:53.685 23:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71684 00:06:53.944 [2024-09-30 23:24:33.740060] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.514 23:24:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:54.514 00:06:54.514 real 0m2.404s 00:06:54.514 user 0m2.491s 00:06:54.514 sys 0m0.653s 00:06:54.514 23:24:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.514 23:24:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.514 
************************************ 00:06:54.514 END TEST raid0_resize_superblock_test 00:06:54.514 ************************************ 00:06:54.514 23:24:34 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:54.514 23:24:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:54.514 23:24:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.514 23:24:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.514 ************************************ 00:06:54.514 START TEST raid1_resize_superblock_test 00:06:54.514 ************************************ 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71760 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71760' 00:06:54.514 Process raid pid: 71760 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71760 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71760 ']' 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.514 23:24:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.514 [2024-09-30 23:24:34.261069] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:54.515 [2024-09-30 23:24:34.261201] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.776 [2024-09-30 23:24:34.427612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.776 [2024-09-30 23:24:34.494512] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.776 [2024-09-30 23:24:34.570103] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.776 [2024-09-30 23:24:34.570138] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.344 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.344 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:55.344 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:55.344 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.344 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.603 malloc0 00:06:55.604 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.604 23:24:35 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:55.604 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.604 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.604 [2024-09-30 23:24:35.284431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:55.604 [2024-09-30 23:24:35.284515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.604 [2024-09-30 23:24:35.284542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:55.604 [2024-09-30 23:24:35.284553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.604 [2024-09-30 23:24:35.286998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.604 [2024-09-30 23:24:35.287036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:55.604 pt0 00:06:55.604 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.604 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:55.604 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.604 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.863 537b07da-a4c1-474b-a30b-4fd7b5eadaf6 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.863 23:24:35 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.863 11c88b34-17a4-467c-a8dc-8c4cf600314e 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.863 bbb1f839-a0b6-44d3-a1c0-59ecc5e1d202 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.863 [2024-09-30 23:24:35.492102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 11c88b34-17a4-467c-a8dc-8c4cf600314e is claimed 00:06:55.863 [2024-09-30 23:24:35.492205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev bbb1f839-a0b6-44d3-a1c0-59ecc5e1d202 is claimed 00:06:55.863 [2024-09-30 23:24:35.492326] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:55.863 [2024-09-30 23:24:35.492341] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:55.863 [2024-09-30 23:24:35.492615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:55.863 [2024-09-30 23:24:35.492800] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:55.863 [2024-09-30 23:24:35.492828] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:06:55.863 [2024-09-30 23:24:35.492984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.863 [2024-09-30 23:24:35.608171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.863 [2024-09-30 23:24:35.651983] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.863 [2024-09-30 23:24:35.652009] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '11c88b34-17a4-467c-a8dc-8c4cf600314e' was resized: old size 131072, new size 204800 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:55.863 23:24:35 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.863 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.864 [2024-09-30 23:24:35.663864] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.864 [2024-09-30 23:24:35.663899] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'bbb1f839-a0b6-44d3-a1c0-59ecc5e1d202' was resized: old size 131072, new size 204800 00:06:55.864 [2024-09-30 23:24:35.663938] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:55.864 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.864 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.864 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:55.864 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.864 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.864 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.123 [2024-09-30 23:24:35.751817] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.123 [2024-09-30 23:24:35.779621] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:56.123 [2024-09-30 23:24:35.779708] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:56.123 [2024-09-30 23:24:35.779739] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:56.123 [2024-09-30 23:24:35.779894] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.123 [2024-09-30 23:24:35.780043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.123 [2024-09-30 23:24:35.780102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.123 [2024-09-30 23:24:35.780116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.123 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.123 [2024-09-30 23:24:35.791541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:56.123 [2024-09-30 23:24:35.791601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.123 [2024-09-30 23:24:35.791621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:56.123 [2024-09-30 23:24:35.791634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.123 [2024-09-30 23:24:35.793949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.123 [2024-09-30 23:24:35.793986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:56.123 [2024-09-30 23:24:35.795367] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
11c88b34-17a4-467c-a8dc-8c4cf600314e 00:06:56.123 [2024-09-30 23:24:35.795438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 11c88b34-17a4-467c-a8dc-8c4cf600314e is claimed 00:06:56.123 [2024-09-30 23:24:35.795519] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev bbb1f839-a0b6-44d3-a1c0-59ecc5e1d202 00:06:56.123 [2024-09-30 23:24:35.795544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev bbb1f839-a0b6-44d3-a1c0-59ecc5e1d202 is claimed 00:06:56.123 [2024-09-30 23:24:35.795623] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev bbb1f839-a0b6-44d3-a1c0-59ecc5e1d202 (2) smaller than existing raid bdev Raid (3) 00:06:56.123 [2024-09-30 23:24:35.795644] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 11c88b34-17a4-467c-a8dc-8c4cf600314e: File exists 00:06:56.124 [2024-09-30 23:24:35.795683] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:06:56.124 [2024-09-30 23:24:35.795692] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:56.124 [2024-09-30 23:24:35.795944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:56.124 [2024-09-30 23:24:35.796072] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:06:56.124 [2024-09-30 23:24:35.796084] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:06:56.124 [2024-09-30 23:24:35.796215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.124 pt0 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.124 [2024-09-30 23:24:35.819897] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71760 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71760 ']' 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71760 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71760 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.124 killing process with pid 71760 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71760' 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71760 00:06:56.124 [2024-09-30 23:24:35.887785] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:56.124 [2024-09-30 23:24:35.887836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.124 [2024-09-30 23:24:35.887900] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.124 [2024-09-30 23:24:35.887910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:06:56.124 23:24:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71760 00:06:56.383 [2024-09-30 23:24:36.192555] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.951 23:24:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:56.951 00:06:56.951 real 0m2.388s 00:06:56.951 user 0m2.414s 00:06:56.951 sys 0m0.680s 00:06:56.951 23:24:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.951 23:24:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.951 ************************************ 00:06:56.951 END TEST raid1_resize_superblock_test 00:06:56.951 
************************************ 00:06:56.951 23:24:36 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:56.951 23:24:36 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:56.951 23:24:36 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:56.951 23:24:36 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:56.951 23:24:36 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:56.951 23:24:36 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:56.951 23:24:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:56.951 23:24:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.951 23:24:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.951 ************************************ 00:06:56.951 START TEST raid_function_test_raid0 00:06:56.951 ************************************ 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71842 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.951 Process raid pid: 71842 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71842' 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71842 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 
71842 ']' 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.951 23:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.952 23:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.952 23:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.952 23:24:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:56.952 [2024-09-30 23:24:36.742644] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:56.952 [2024-09-30 23:24:36.742782] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.211 [2024-09-30 23:24:36.926404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.211 [2024-09-30 23:24:36.970154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.211 [2024-09-30 23:24:37.013227] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.211 [2024-09-30 23:24:37.013268] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:57.777 23:24:37 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:57.777 Base_1 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:57.777 Base_2 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:57.777 [2024-09-30 23:24:37.612338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:57.777 [2024-09-30 23:24:37.614244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:57.777 [2024-09-30 23:24:37.614325] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:06:57.777 [2024-09-30 23:24:37.614338] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:57.777 [2024-09-30 23:24:37.614603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:57.777 [2024-09-30 23:24:37.614720] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:06:57.777 [2024-09-30 23:24:37.614734] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000006280 00:06:57.777 [2024-09-30 23:24:37.614889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.777 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:58.035 [2024-09-30 23:24:37.859946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:58.035 /dev/nbd0 00:06:58.035 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.293 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.293 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:58.293 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:58.293 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.294 1+0 records in 00:06:58.294 1+0 records out 00:06:58.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226037 s, 18.1 MB/s 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@886 -- # size=4096 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:58.294 23:24:37 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:58.294 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.294 { 00:06:58.294 "nbd_device": "/dev/nbd0", 00:06:58.294 "bdev_name": "raid" 00:06:58.294 } 00:06:58.294 ]' 00:06:58.294 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.294 { 00:06:58.294 "nbd_device": "/dev/nbd0", 00:06:58.294 "bdev_name": "raid" 00:06:58.294 } 00:06:58.294 ]' 00:06:58.294 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.553 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:58.553 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.553 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:58.553 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:58.553 
23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:58.553 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:58.553 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:58.553 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:58.554 4096+0 records in 00:06:58.554 4096+0 records out 00:06:58.554 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0252229 s, 83.1 MB/s 00:06:58.554 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:58.814 4096+0 records in 00:06:58.814 4096+0 records out 00:06:58.814 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.188029 s, 11.2 MB/s 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:58.814 128+0 records in 00:06:58.814 128+0 records out 00:06:58.814 65536 bytes (66 kB, 64 KiB) copied, 0.00124907 s, 52.5 MB/s 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:58.814 2035+0 records in 00:06:58.814 2035+0 records out 00:06:58.814 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0148947 s, 70.0 MB/s 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:58.814 456+0 records in 00:06:58.814 456+0 records out 00:06:58.814 233472 bytes (233 kB, 228 KiB) copied, 0.00265583 s, 87.9 MB/s 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:58.814 23:24:38 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.814 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.074 [2024-09-30 23:24:38.737660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:59.074 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:59.333 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.333 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.333 23:24:38 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71842 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71842 ']' 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # kill -0 71842 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71842 00:06:59.333 killing process with pid 71842 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71842' 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71842 00:06:59.333 [2024-09-30 23:24:39.060270] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.333 [2024-09-30 23:24:39.060390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.333 [2024-09-30 23:24:39.060441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.333 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71842 00:06:59.333 [2024-09-30 23:24:39.060454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:06:59.333 [2024-09-30 23:24:39.082997] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.593 23:24:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:59.593 00:06:59.593 real 0m2.667s 00:06:59.593 user 0m3.283s 00:06:59.593 sys 0m0.917s 00:06:59.593 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.593 ************************************ 
00:06:59.593 END TEST raid_function_test_raid0 00:06:59.593 23:24:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:59.593 ************************************ 00:06:59.593 23:24:39 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:59.593 23:24:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.593 23:24:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.593 23:24:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:59.593 ************************************ 00:06:59.593 START TEST raid_function_test_concat 00:06:59.593 ************************************ 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71954 00:06:59.593 Process raid pid: 71954 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71954' 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71954 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71954 ']' 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.593 23:24:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:59.852 [2024-09-30 23:24:39.470746] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:06:59.853 [2024-09-30 23:24:39.470875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.853 [2024-09-30 23:24:39.632120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.853 [2024-09-30 23:24:39.676833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.112 [2024-09-30 23:24:39.719277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.112 [2024-09-30 23:24:39.719309] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.695 Base_1 
00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.695 Base_2 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.695 [2024-09-30 23:24:40.366614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.695 [2024-09-30 23:24:40.368433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.695 [2024-09-30 23:24:40.368502] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:00.695 [2024-09-30 23:24:40.368513] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.695 [2024-09-30 23:24:40.368767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:00.695 [2024-09-30 23:24:40.368922] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:00.695 [2024-09-30 23:24:40.368944] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:00.695 [2024-09-30 23:24:40.369092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.695 23:24:40 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.695 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:07:00.955 [2024-09-30 23:24:40.606356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.955 /dev/nbd0 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.955 1+0 records in 00:07:00.955 1+0 records out 00:07:00.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435915 s, 9.4 MB/s 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:00.955 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.214 { 00:07:01.214 "nbd_device": "/dev/nbd0", 00:07:01.214 "bdev_name": "raid" 00:07:01.214 } 00:07:01.214 ]' 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.214 { 00:07:01.214 "nbd_device": "/dev/nbd0", 00:07:01.214 "bdev_name": "raid" 00:07:01.214 } 00:07:01.214 ]' 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:01.214 23:24:40 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:01.214 4096+0 records in 00:07:01.214 4096+0 records out 00:07:01.214 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0348778 s, 60.1 MB/s 00:07:01.214 23:24:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:01.472 4096+0 records in 00:07:01.472 4096+0 records out 00:07:01.472 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.18625 s, 11.3 MB/s 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:01.473 128+0 records in 00:07:01.473 128+0 records out 00:07:01.473 65536 bytes (66 kB, 64 KiB) copied, 0.00109999 s, 59.6 MB/s 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:01.473 2035+0 records in 00:07:01.473 2035+0 records out 00:07:01.473 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.012572 s, 82.9 MB/s 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:01.473 456+0 records in 00:07:01.473 456+0 records out 00:07:01.473 233472 bytes (233 kB, 228 KiB) copied, 0.00232455 s, 100 MB/s 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.473 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.732 [2024-09-30 23:24:41.476368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@41 -- # break 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:01.732 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71954 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71954 ']' 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 71954 00:07:01.992 
23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71954 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.992 killing process with pid 71954 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71954' 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71954 00:07:01.992 [2024-09-30 23:24:41.779648] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.992 [2024-09-30 23:24:41.779792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.992 23:24:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71954 00:07:01.992 [2024-09-30 23:24:41.779856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.992 [2024-09-30 23:24:41.779894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:01.992 [2024-09-30 23:24:41.803886] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.251 23:24:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:02.251 00:07:02.251 real 0m2.653s 00:07:02.251 user 0m3.259s 00:07:02.251 sys 0m0.925s 00:07:02.251 23:24:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.251 23:24:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 
00:07:02.251 ************************************ 00:07:02.251 END TEST raid_function_test_concat 00:07:02.251 ************************************ 00:07:02.251 23:24:42 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:02.251 23:24:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.251 23:24:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.251 23:24:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.510 ************************************ 00:07:02.510 START TEST raid0_resize_test 00:07:02.510 ************************************ 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72066 00:07:02.510 Process raid pid: 72066 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72066' 00:07:02.510 23:24:42 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72066 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72066 ']' 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.510 23:24:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.510 [2024-09-30 23:24:42.194016] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:02.510 [2024-09-30 23:24:42.194575] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.510 [2024-09-30 23:24:42.352491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.770 [2024-09-30 23:24:42.398953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.770 [2024-09-30 23:24:42.441484] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.770 [2024-09-30 23:24:42.441527] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.340 Base_1 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.340 Base_2 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.340 [2024-09-30 23:24:43.046891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:03.340 [2024-09-30 23:24:43.048644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:03.340 [2024-09-30 23:24:43.048727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:03.340 [2024-09-30 23:24:43.048737] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:03.340 [2024-09-30 23:24:43.048991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:03.340 [2024-09-30 23:24:43.049095] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:03.340 [2024-09-30 23:24:43.049112] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:03.340 [2024-09-30 23:24:43.049242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.340 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.341 [2024-09-30 23:24:43.058810] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.341 [2024-09-30 23:24:43.058837] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:03.341 true 
00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.341 [2024-09-30 23:24:43.074986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.341 [2024-09-30 23:24:43.110721] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.341 [2024-09-30 23:24:43.110741] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:03.341 [2024-09-30 23:24:43.110769] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:03.341 true 
00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.341 [2024-09-30 23:24:43.126888] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72066 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72066 ']' 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72066 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.341 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72066 00:07:03.601 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.601 killing process with pid 72066 
00:07:03.601 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:03.601 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72066'
00:07:03.601 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72066
00:07:03.601 [2024-09-30 23:24:43.209393] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:03.601 [2024-09-30 23:24:43.209468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:03.601 [2024-09-30 23:24:43.209518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:03.601 [2024-09-30 23:24:43.209528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:07:03.601 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72066
00:07:03.601 [2024-09-30 23:24:43.211112] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:03.601 23:24:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:03.601
00:07:03.601 real 0m1.341s
00:07:03.601 user 0m1.483s
00:07:03.601 sys 0m0.316s
00:07:03.601 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:03.601 23:24:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.861 ************************************
00:07:03.861 END TEST raid0_resize_test
00:07:03.861 ************************************
00:07:03.861 23:24:43 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:07:03.861 23:24:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:03.861 23:24:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:03.861 23:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:03.861 ************************************
00:07:03.861 START TEST raid1_resize_test
00:07:03.861 ************************************
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72116
00:07:03.861 Process raid pid: 72116
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72116'
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72116
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72116 ']'
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:03.861 23:24:43 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.861 [2024-09-30 23:24:43.611570] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:07:03.861 [2024-09-30 23:24:43.611709] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:04.121 [2024-09-30 23:24:43.765819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.121 [2024-09-30 23:24:43.810951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:04.121 [2024-09-30 23:24:43.853591] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:04.121 [2024-09-30 23:24:43.853652] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.692 Base_1
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.692 Base_2
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.692 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.692 [2024-09-30 23:24:44.446878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:04.692 [2024-09-30 23:24:44.448602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:04.692 [2024-09-30 23:24:44.448666] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:04.692 [2024-09-30 23:24:44.448678] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:07:04.692 [2024-09-30 23:24:44.448923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:07:04.693 [2024-09-30 23:24:44.449030] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:04.693 [2024-09-30 23:24:44.449045] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:07:04.693 [2024-09-30 23:24:44.449148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.693 [2024-09-30 23:24:44.458816] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:04.693 [2024-09-30 23:24:44.458850] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:07:04.693 true
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.693 [2024-09-30 23:24:44.474994] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.693 [2024-09-30 23:24:44.518721] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:04.693 [2024-09-30 23:24:44.518747] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:07:04.693 [2024-09-30 23:24:44.518770] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:07:04.693 true
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:04.693 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:04.693 [2024-09-30 23:24:44.534848] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72116
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72116 ']'
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72116
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72116
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 72116
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72116'
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72116
00:07:04.953 [2024-09-30 23:24:44.597141] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:04.953 [2024-09-30 23:24:44.597209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:04.953 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72116
00:07:04.953 [2024-09-30 23:24:44.597593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:04.953 [2024-09-30 23:24:44.597621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:07:04.953 [2024-09-30 23:24:44.598742] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:05.212 23:24:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:05.212
00:07:05.212 real 0m1.320s
00:07:05.212 user 0m1.453s
00:07:05.212 sys 0m0.308s
00:07:05.212 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:05.212 23:24:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.212 ************************************
00:07:05.212 END TEST raid1_resize_test
00:07:05.212 ************************************
00:07:05.212 23:24:44 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:07:05.212 23:24:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:05.212 23:24:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:07:05.212 23:24:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:05.212 23:24:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:05.212 23:24:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:05.212 ************************************
00:07:05.212 START TEST raid_state_function_test
00:07:05.212 ************************************
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:05.212 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72168
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72168'
Process raid pid: 72168
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72168
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72168 ']'
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:05.213 23:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.213 [2024-09-30 23:24:44.997774] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:07:05.213 [2024-09-30 23:24:44.997916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:05.472 [2024-09-30 23:24:45.160066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.472 [2024-09-30 23:24:45.205929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.472 [2024-09-30 23:24:45.248624] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:05.472 [2024-09-30 23:24:45.248667] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.041 [2024-09-30 23:24:45.834023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:06.041 [2024-09-30 23:24:45.834080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:06.041 [2024-09-30 23:24:45.834099] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:06.041 [2024-09-30 23:24:45.834109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:06.041 "name": "Existed_Raid",
00:07:06.041 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:06.041 "strip_size_kb": 64,
00:07:06.041 "state": "configuring",
00:07:06.041 "raid_level": "raid0",
00:07:06.041 "superblock": false,
00:07:06.041 "num_base_bdevs": 2,
00:07:06.041 "num_base_bdevs_discovered": 0,
00:07:06.041 "num_base_bdevs_operational": 2,
00:07:06.041 "base_bdevs_list": [
00:07:06.041 {
00:07:06.041 "name": "BaseBdev1",
00:07:06.041 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:06.041 "is_configured": false,
00:07:06.041 "data_offset": 0,
00:07:06.041 "data_size": 0
00:07:06.041 },
00:07:06.041 {
00:07:06.041 "name": "BaseBdev2",
00:07:06.041 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:06.041 "is_configured": false,
00:07:06.041 "data_offset": 0,
00:07:06.041 "data_size": 0
00:07:06.041 }
00:07:06.041 ]
00:07:06.041 }'
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:06.041 23:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.611 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:06.611 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.611 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.611 [2024-09-30 23:24:46.285136] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:06.611 [2024-09-30 23:24:46.285183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:07:06.611 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.611 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:06.611 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.611 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.612 [2024-09-30 23:24:46.297145] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:06.612 [2024-09-30 23:24:46.297187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:06.612 [2024-09-30 23:24:46.297212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:06.612 [2024-09-30 23:24:46.297221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.612 [2024-09-30 23:24:46.318012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.612 [
00:07:06.612 {
00:07:06.612 "name": "BaseBdev1",
00:07:06.612 "aliases": [
00:07:06.612 "7b608798-ea2b-45dd-bcf6-9d5f66995cd3"
00:07:06.612 ],
00:07:06.612 "product_name": "Malloc disk",
00:07:06.612 "block_size": 512,
00:07:06.612 "num_blocks": 65536,
00:07:06.612 "uuid": "7b608798-ea2b-45dd-bcf6-9d5f66995cd3",
00:07:06.612 "assigned_rate_limits": {
00:07:06.612 "rw_ios_per_sec": 0,
00:07:06.612 "rw_mbytes_per_sec": 0,
00:07:06.612 "r_mbytes_per_sec": 0,
00:07:06.612 "w_mbytes_per_sec": 0
00:07:06.612 },
00:07:06.612 "claimed": true,
00:07:06.612 "claim_type": "exclusive_write",
00:07:06.612 "zoned": false,
00:07:06.612 "supported_io_types": {
00:07:06.612 "read": true,
00:07:06.612 "write": true,
00:07:06.612 "unmap": true,
00:07:06.612 "flush": true,
00:07:06.612 "reset": true,
00:07:06.612 "nvme_admin": false,
00:07:06.612 "nvme_io": false,
00:07:06.612 "nvme_io_md": false,
00:07:06.612 "write_zeroes": true,
00:07:06.612 "zcopy": true,
00:07:06.612 "get_zone_info": false,
00:07:06.612 "zone_management": false,
00:07:06.612 "zone_append": false,
00:07:06.612 "compare": false,
00:07:06.612 "compare_and_write": false,
00:07:06.612 "abort": true,
00:07:06.612 "seek_hole": false,
00:07:06.612 "seek_data": false,
00:07:06.612 "copy": true,
00:07:06.612 "nvme_iov_md": false
00:07:06.612 },
00:07:06.612 "memory_domains": [
00:07:06.612 {
00:07:06.612 "dma_device_id": "system",
00:07:06.612 "dma_device_type": 1
00:07:06.612 },
00:07:06.612 {
00:07:06.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:06.612 "dma_device_type": 2
00:07:06.612 }
00:07:06.612 ],
00:07:06.612 "driver_specific": {}
00:07:06.612 }
00:07:06.612 ]
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:06.612 "name": "Existed_Raid",
00:07:06.612 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:06.612 "strip_size_kb": 64,
00:07:06.612 "state": "configuring",
00:07:06.612 "raid_level": "raid0",
00:07:06.612 "superblock": false,
00:07:06.612 "num_base_bdevs": 2,
00:07:06.612 "num_base_bdevs_discovered": 1,
00:07:06.612 "num_base_bdevs_operational": 2,
00:07:06.612 "base_bdevs_list": [
00:07:06.612 {
00:07:06.612 "name": "BaseBdev1",
00:07:06.612 "uuid": "7b608798-ea2b-45dd-bcf6-9d5f66995cd3",
00:07:06.612 "is_configured": true,
00:07:06.612 "data_offset": 0,
00:07:06.612 "data_size": 65536
00:07:06.612 },
00:07:06.612 {
00:07:06.612 "name": "BaseBdev2",
00:07:06.612 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:06.612 "is_configured": false,
00:07:06.612 "data_offset": 0,
00:07:06.612 "data_size": 0
00:07:06.612 }
00:07:06.612 ]
00:07:06.612 }'
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:06.612 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.181 [2024-09-30 23:24:46.801223] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:07.181 [2024-09-30 23:24:46.801285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.181 [2024-09-30 23:24:46.809223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:07.181 [2024-09-30 23:24:46.811049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:07.181 [2024-09-30 23:24:46.811093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:07.181 "name": "Existed_Raid",
00:07:07.181 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:07.181 "strip_size_kb": 64,
00:07:07.181 "state": "configuring",
00:07:07.181 "raid_level": "raid0",
00:07:07.181 "superblock": false,
00:07:07.181 "num_base_bdevs": 2,
00:07:07.181 "num_base_bdevs_discovered": 1,
00:07:07.181 "num_base_bdevs_operational": 2,
00:07:07.181 "base_bdevs_list": [
00:07:07.181 {
00:07:07.181 "name": "BaseBdev1",
00:07:07.181 "uuid": "7b608798-ea2b-45dd-bcf6-9d5f66995cd3",
00:07:07.181 "is_configured": true,
00:07:07.181 "data_offset": 0,
00:07:07.181 "data_size": 65536
00:07:07.181 },
00:07:07.181 {
00:07:07.181 "name": "BaseBdev2",
00:07:07.181 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:07.181 "is_configured": false,
00:07:07.181 "data_offset": 0,
00:07:07.181 "data_size": 0
00:07:07.181 }
00:07:07.181 ]
00:07:07.181 }'
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:07.181 23:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.441 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:07.441 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.441 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.702 [2024-09-30 23:24:47.296323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:07.702 [2024-09-30 23:24:47.296444] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:07:07.702 [2024-09-30 23:24:47.296478] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:07.702 [2024-09-30 23:24:47.297449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:07.702 [2024-09-30 23:24:47.297946] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:07:07.702 [2024-09-30 23:24:47.298021] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:07:07.702 [2024-09-30 23:24:47.298667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
BaseBdev2
23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.702 23:24:47
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.702 [ 00:07:07.702 { 00:07:07.702 "name": "BaseBdev2", 00:07:07.702 "aliases": [ 00:07:07.702 "05c2fef9-6cc1-48ac-8d22-092106419096" 00:07:07.702 ], 00:07:07.702 "product_name": "Malloc disk", 00:07:07.702 "block_size": 512, 00:07:07.702 "num_blocks": 65536, 00:07:07.702 "uuid": "05c2fef9-6cc1-48ac-8d22-092106419096", 00:07:07.702 "assigned_rate_limits": { 00:07:07.702 "rw_ios_per_sec": 0, 00:07:07.702 "rw_mbytes_per_sec": 0, 00:07:07.702 "r_mbytes_per_sec": 0, 00:07:07.702 "w_mbytes_per_sec": 0 00:07:07.702 }, 00:07:07.702 "claimed": true, 00:07:07.702 "claim_type": "exclusive_write", 00:07:07.702 "zoned": false, 00:07:07.702 "supported_io_types": { 00:07:07.702 "read": true, 00:07:07.702 "write": true, 00:07:07.702 "unmap": true, 00:07:07.702 "flush": true, 00:07:07.702 "reset": true, 00:07:07.702 "nvme_admin": false, 00:07:07.702 "nvme_io": false, 00:07:07.702 "nvme_io_md": false, 00:07:07.702 "write_zeroes": true, 00:07:07.702 "zcopy": true, 00:07:07.702 "get_zone_info": false, 00:07:07.702 "zone_management": false, 00:07:07.702 "zone_append": false, 00:07:07.702 "compare": false, 00:07:07.702 "compare_and_write": false, 00:07:07.702 "abort": true, 00:07:07.702 "seek_hole": false, 00:07:07.702 "seek_data": false, 00:07:07.702 "copy": true, 00:07:07.702 "nvme_iov_md": false 00:07:07.702 }, 00:07:07.702 "memory_domains": [ 00:07:07.702 { 00:07:07.702 "dma_device_id": "system", 00:07:07.702 "dma_device_type": 1 00:07:07.702 }, 00:07:07.702 { 00:07:07.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.702 "dma_device_type": 2 00:07:07.702 } 00:07:07.702 ], 00:07:07.702 "driver_specific": {} 00:07:07.702 } 00:07:07.702 ] 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:07.702 23:24:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.702 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:07.702 "name": "Existed_Raid", 00:07:07.702 "uuid": "89a0c646-152c-4c2b-85d7-e1a323957cc7", 00:07:07.702 "strip_size_kb": 64, 00:07:07.702 "state": "online", 00:07:07.702 "raid_level": "raid0", 00:07:07.702 "superblock": false, 00:07:07.702 "num_base_bdevs": 2, 00:07:07.702 "num_base_bdevs_discovered": 2, 00:07:07.702 "num_base_bdevs_operational": 2, 00:07:07.702 "base_bdevs_list": [ 00:07:07.702 { 00:07:07.702 "name": "BaseBdev1", 00:07:07.702 "uuid": "7b608798-ea2b-45dd-bcf6-9d5f66995cd3", 00:07:07.702 "is_configured": true, 00:07:07.702 "data_offset": 0, 00:07:07.702 "data_size": 65536 00:07:07.702 }, 00:07:07.702 { 00:07:07.702 "name": "BaseBdev2", 00:07:07.702 "uuid": "05c2fef9-6cc1-48ac-8d22-092106419096", 00:07:07.702 "is_configured": true, 00:07:07.702 "data_offset": 0, 00:07:07.703 "data_size": 65536 00:07:07.703 } 00:07:07.703 ] 00:07:07.703 }' 00:07:07.703 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.703 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.963 [2024-09-30 23:24:47.767746] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:07.963 "name": "Existed_Raid", 00:07:07.963 "aliases": [ 00:07:07.963 "89a0c646-152c-4c2b-85d7-e1a323957cc7" 00:07:07.963 ], 00:07:07.963 "product_name": "Raid Volume", 00:07:07.963 "block_size": 512, 00:07:07.963 "num_blocks": 131072, 00:07:07.963 "uuid": "89a0c646-152c-4c2b-85d7-e1a323957cc7", 00:07:07.963 "assigned_rate_limits": { 00:07:07.963 "rw_ios_per_sec": 0, 00:07:07.963 "rw_mbytes_per_sec": 0, 00:07:07.963 "r_mbytes_per_sec": 0, 00:07:07.963 "w_mbytes_per_sec": 0 00:07:07.963 }, 00:07:07.963 "claimed": false, 00:07:07.963 "zoned": false, 00:07:07.963 "supported_io_types": { 00:07:07.963 "read": true, 00:07:07.963 "write": true, 00:07:07.963 "unmap": true, 00:07:07.963 "flush": true, 00:07:07.963 "reset": true, 00:07:07.963 "nvme_admin": false, 00:07:07.963 "nvme_io": false, 00:07:07.963 "nvme_io_md": false, 00:07:07.963 "write_zeroes": true, 00:07:07.963 "zcopy": false, 00:07:07.963 "get_zone_info": false, 00:07:07.963 "zone_management": false, 00:07:07.963 "zone_append": false, 00:07:07.963 "compare": false, 00:07:07.963 "compare_and_write": false, 00:07:07.963 "abort": false, 00:07:07.963 "seek_hole": false, 00:07:07.963 "seek_data": false, 00:07:07.963 "copy": false, 00:07:07.963 "nvme_iov_md": false 00:07:07.963 }, 00:07:07.963 "memory_domains": [ 00:07:07.963 { 00:07:07.963 "dma_device_id": "system", 00:07:07.963 "dma_device_type": 1 00:07:07.963 }, 00:07:07.963 { 00:07:07.963 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:07.963 "dma_device_type": 2 00:07:07.963 }, 00:07:07.963 { 00:07:07.963 "dma_device_id": "system", 00:07:07.963 "dma_device_type": 1 00:07:07.963 }, 00:07:07.963 { 00:07:07.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.963 "dma_device_type": 2 00:07:07.963 } 00:07:07.963 ], 00:07:07.963 "driver_specific": { 00:07:07.963 "raid": { 00:07:07.963 "uuid": "89a0c646-152c-4c2b-85d7-e1a323957cc7", 00:07:07.963 "strip_size_kb": 64, 00:07:07.963 "state": "online", 00:07:07.963 "raid_level": "raid0", 00:07:07.963 "superblock": false, 00:07:07.963 "num_base_bdevs": 2, 00:07:07.963 "num_base_bdevs_discovered": 2, 00:07:07.963 "num_base_bdevs_operational": 2, 00:07:07.963 "base_bdevs_list": [ 00:07:07.963 { 00:07:07.963 "name": "BaseBdev1", 00:07:07.963 "uuid": "7b608798-ea2b-45dd-bcf6-9d5f66995cd3", 00:07:07.963 "is_configured": true, 00:07:07.963 "data_offset": 0, 00:07:07.963 "data_size": 65536 00:07:07.963 }, 00:07:07.963 { 00:07:07.963 "name": "BaseBdev2", 00:07:07.963 "uuid": "05c2fef9-6cc1-48ac-8d22-092106419096", 00:07:07.963 "is_configured": true, 00:07:07.963 "data_offset": 0, 00:07:07.963 "data_size": 65536 00:07:07.963 } 00:07:07.963 ] 00:07:07.963 } 00:07:07.963 } 00:07:07.963 }' 00:07:07.963 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:08.223 BaseBdev2' 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:08.223 [2024-09-30 23:24:47.979163] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:08.223 [2024-09-30 23:24:47.979199] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.223 [2024-09-30 23:24:47.979247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.223 23:24:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.223 23:24:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.223 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.223 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.223 "name": "Existed_Raid", 00:07:08.223 "uuid": "89a0c646-152c-4c2b-85d7-e1a323957cc7", 00:07:08.223 "strip_size_kb": 64, 00:07:08.223 "state": "offline", 00:07:08.223 "raid_level": "raid0", 00:07:08.223 "superblock": false, 00:07:08.223 "num_base_bdevs": 2, 00:07:08.223 "num_base_bdevs_discovered": 1, 00:07:08.223 "num_base_bdevs_operational": 1, 00:07:08.223 "base_bdevs_list": [ 00:07:08.223 { 00:07:08.223 "name": null, 00:07:08.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.223 "is_configured": false, 00:07:08.223 "data_offset": 0, 00:07:08.223 "data_size": 65536 00:07:08.223 }, 00:07:08.223 { 00:07:08.223 "name": "BaseBdev2", 00:07:08.223 "uuid": "05c2fef9-6cc1-48ac-8d22-092106419096", 00:07:08.223 "is_configured": true, 00:07:08.223 "data_offset": 0, 00:07:08.223 "data_size": 65536 00:07:08.223 } 00:07:08.223 ] 00:07:08.223 }' 00:07:08.223 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.223 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.792 [2024-09-30 23:24:48.477561] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:08.792 [2024-09-30 23:24:48.477611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.792 23:24:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.792 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72168 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72168 ']' 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72168 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72168 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.793 killing process with pid 72168 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72168' 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72168 00:07:08.793 [2024-09-30 23:24:48.555146] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:08.793 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72168 00:07:08.793 [2024-09-30 23:24:48.556107] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:09.053 00:07:09.053 real 0m3.882s 00:07:09.053 user 0m6.123s 00:07:09.053 sys 0m0.741s 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.053 ************************************ 00:07:09.053 END TEST raid_state_function_test 00:07:09.053 ************************************ 00:07:09.053 23:24:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:09.053 23:24:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:09.053 23:24:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.053 23:24:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.053 ************************************ 00:07:09.053 START TEST raid_state_function_test_sb 00:07:09.053 ************************************ 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72404 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.053 Process raid pid: 72404 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72404' 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72404 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72404 ']' 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.053 23:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.313 [2024-09-30 23:24:48.936824] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:09.313 [2024-09-30 23:24:48.936967] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.313 [2024-09-30 23:24:49.076538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.313 [2024-09-30 23:24:49.119158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.313 [2024-09-30 23:24:49.160997] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.313 [2024-09-30 23:24:49.161038] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.252 [2024-09-30 23:24:49.778045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.252 [2024-09-30 23:24:49.778090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.252 [2024-09-30 23:24:49.778117] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.252 [2024-09-30 23:24:49.778128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.252 
23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.252 "name": "Existed_Raid", 00:07:10.252 "uuid": "8aa9ebb3-0934-4dc4-a79b-28b652f12ad9", 00:07:10.252 "strip_size_kb": 
64, 00:07:10.252 "state": "configuring", 00:07:10.252 "raid_level": "raid0", 00:07:10.252 "superblock": true, 00:07:10.252 "num_base_bdevs": 2, 00:07:10.252 "num_base_bdevs_discovered": 0, 00:07:10.252 "num_base_bdevs_operational": 2, 00:07:10.252 "base_bdevs_list": [ 00:07:10.252 { 00:07:10.252 "name": "BaseBdev1", 00:07:10.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.252 "is_configured": false, 00:07:10.252 "data_offset": 0, 00:07:10.252 "data_size": 0 00:07:10.252 }, 00:07:10.252 { 00:07:10.252 "name": "BaseBdev2", 00:07:10.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.252 "is_configured": false, 00:07:10.252 "data_offset": 0, 00:07:10.252 "data_size": 0 00:07:10.252 } 00:07:10.252 ] 00:07:10.252 }' 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.252 23:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.512 [2024-09-30 23:24:50.189297] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:10.512 [2024-09-30 23:24:50.189348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.512 23:24:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.512 [2024-09-30 23:24:50.197317] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:10.512 [2024-09-30 23:24:50.197360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:10.512 [2024-09-30 23:24:50.197368] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:10.512 [2024-09-30 23:24:50.197377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.512 [2024-09-30 23:24:50.214203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.512 BaseBdev1 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.512 [ 00:07:10.512 { 00:07:10.512 "name": "BaseBdev1", 00:07:10.512 "aliases": [ 00:07:10.512 "0811fba3-f130-4a61-a869-69bf97319c9e" 00:07:10.512 ], 00:07:10.512 "product_name": "Malloc disk", 00:07:10.512 "block_size": 512, 00:07:10.512 "num_blocks": 65536, 00:07:10.512 "uuid": "0811fba3-f130-4a61-a869-69bf97319c9e", 00:07:10.512 "assigned_rate_limits": { 00:07:10.512 "rw_ios_per_sec": 0, 00:07:10.512 "rw_mbytes_per_sec": 0, 00:07:10.512 "r_mbytes_per_sec": 0, 00:07:10.512 "w_mbytes_per_sec": 0 00:07:10.512 }, 00:07:10.512 "claimed": true, 00:07:10.512 "claim_type": "exclusive_write", 00:07:10.512 "zoned": false, 00:07:10.512 "supported_io_types": { 00:07:10.512 "read": true, 00:07:10.512 "write": true, 00:07:10.512 "unmap": true, 00:07:10.512 "flush": true, 00:07:10.512 "reset": true, 00:07:10.512 "nvme_admin": false, 00:07:10.512 "nvme_io": false, 00:07:10.512 "nvme_io_md": false, 00:07:10.512 "write_zeroes": true, 00:07:10.512 "zcopy": true, 00:07:10.512 "get_zone_info": false, 00:07:10.512 "zone_management": false, 00:07:10.512 "zone_append": false, 00:07:10.512 "compare": false, 00:07:10.512 "compare_and_write": false, 00:07:10.512 
"abort": true, 00:07:10.512 "seek_hole": false, 00:07:10.512 "seek_data": false, 00:07:10.512 "copy": true, 00:07:10.512 "nvme_iov_md": false 00:07:10.512 }, 00:07:10.512 "memory_domains": [ 00:07:10.512 { 00:07:10.512 "dma_device_id": "system", 00:07:10.512 "dma_device_type": 1 00:07:10.512 }, 00:07:10.512 { 00:07:10.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.512 "dma_device_type": 2 00:07:10.512 } 00:07:10.512 ], 00:07:10.512 "driver_specific": {} 00:07:10.512 } 00:07:10.512 ] 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.512 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.513 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.513 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.513 "name": "Existed_Raid", 00:07:10.513 "uuid": "b324ccc4-c6bf-4311-bd92-1bdb64ac1fb4", 00:07:10.513 "strip_size_kb": 64, 00:07:10.513 "state": "configuring", 00:07:10.513 "raid_level": "raid0", 00:07:10.513 "superblock": true, 00:07:10.513 "num_base_bdevs": 2, 00:07:10.513 "num_base_bdevs_discovered": 1, 00:07:10.513 "num_base_bdevs_operational": 2, 00:07:10.513 "base_bdevs_list": [ 00:07:10.513 { 00:07:10.513 "name": "BaseBdev1", 00:07:10.513 "uuid": "0811fba3-f130-4a61-a869-69bf97319c9e", 00:07:10.513 "is_configured": true, 00:07:10.513 "data_offset": 2048, 00:07:10.513 "data_size": 63488 00:07:10.513 }, 00:07:10.513 { 00:07:10.513 "name": "BaseBdev2", 00:07:10.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.513 "is_configured": false, 00:07:10.513 "data_offset": 0, 00:07:10.513 "data_size": 0 00:07:10.513 } 00:07:10.513 ] 00:07:10.513 }' 00:07:10.513 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.513 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.083 [2024-09-30 23:24:50.681426] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.083 [2024-09-30 23:24:50.681475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.083 [2024-09-30 23:24:50.693437] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.083 [2024-09-30 23:24:50.695258] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.083 [2024-09-30 23:24:50.695315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.083 "name": "Existed_Raid", 00:07:11.083 "uuid": "f5a8b068-c3dc-4b7c-b9bb-4b41f364a2bf", 00:07:11.083 "strip_size_kb": 64, 00:07:11.083 "state": "configuring", 00:07:11.083 "raid_level": "raid0", 00:07:11.083 "superblock": true, 00:07:11.083 "num_base_bdevs": 2, 00:07:11.083 "num_base_bdevs_discovered": 1, 00:07:11.083 "num_base_bdevs_operational": 2, 00:07:11.083 "base_bdevs_list": [ 00:07:11.083 { 00:07:11.083 "name": "BaseBdev1", 00:07:11.083 "uuid": "0811fba3-f130-4a61-a869-69bf97319c9e", 00:07:11.083 "is_configured": true, 00:07:11.083 "data_offset": 2048, 
00:07:11.083 "data_size": 63488 00:07:11.083 }, 00:07:11.083 { 00:07:11.083 "name": "BaseBdev2", 00:07:11.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.083 "is_configured": false, 00:07:11.083 "data_offset": 0, 00:07:11.083 "data_size": 0 00:07:11.083 } 00:07:11.083 ] 00:07:11.083 }' 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.083 23:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.344 [2024-09-30 23:24:51.162313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:11.344 [2024-09-30 23:24:51.162549] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:11.344 [2024-09-30 23:24:51.162574] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:11.344 [2024-09-30 23:24:51.162888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:11.344 [2024-09-30 23:24:51.163058] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:11.344 [2024-09-30 23:24:51.163083] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:11.344 BaseBdev2 00:07:11.344 [2024-09-30 23:24:51.163218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.344 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.344 [ 00:07:11.344 { 00:07:11.344 "name": "BaseBdev2", 00:07:11.344 "aliases": [ 00:07:11.344 "0a2365b8-4495-4ce7-a9aa-3c972b146504" 00:07:11.344 ], 00:07:11.344 "product_name": "Malloc disk", 00:07:11.344 "block_size": 512, 00:07:11.344 "num_blocks": 65536, 00:07:11.344 "uuid": "0a2365b8-4495-4ce7-a9aa-3c972b146504", 00:07:11.344 "assigned_rate_limits": { 00:07:11.344 "rw_ios_per_sec": 0, 00:07:11.344 "rw_mbytes_per_sec": 0, 00:07:11.344 "r_mbytes_per_sec": 0, 00:07:11.344 "w_mbytes_per_sec": 0 00:07:11.344 }, 00:07:11.344 "claimed": true, 00:07:11.344 "claim_type": 
"exclusive_write", 00:07:11.344 "zoned": false, 00:07:11.344 "supported_io_types": { 00:07:11.344 "read": true, 00:07:11.344 "write": true, 00:07:11.344 "unmap": true, 00:07:11.344 "flush": true, 00:07:11.344 "reset": true, 00:07:11.344 "nvme_admin": false, 00:07:11.344 "nvme_io": false, 00:07:11.344 "nvme_io_md": false, 00:07:11.344 "write_zeroes": true, 00:07:11.344 "zcopy": true, 00:07:11.344 "get_zone_info": false, 00:07:11.344 "zone_management": false, 00:07:11.344 "zone_append": false, 00:07:11.344 "compare": false, 00:07:11.344 "compare_and_write": false, 00:07:11.344 "abort": true, 00:07:11.344 "seek_hole": false, 00:07:11.344 "seek_data": false, 00:07:11.344 "copy": true, 00:07:11.344 "nvme_iov_md": false 00:07:11.344 }, 00:07:11.603 "memory_domains": [ 00:07:11.603 { 00:07:11.603 "dma_device_id": "system", 00:07:11.603 "dma_device_type": 1 00:07:11.603 }, 00:07:11.603 { 00:07:11.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.603 "dma_device_type": 2 00:07:11.603 } 00:07:11.603 ], 00:07:11.603 "driver_specific": {} 00:07:11.603 } 00:07:11.603 ] 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.603 "name": "Existed_Raid", 00:07:11.603 "uuid": "f5a8b068-c3dc-4b7c-b9bb-4b41f364a2bf", 00:07:11.603 "strip_size_kb": 64, 00:07:11.603 "state": "online", 00:07:11.603 "raid_level": "raid0", 00:07:11.603 "superblock": true, 00:07:11.603 "num_base_bdevs": 2, 00:07:11.603 "num_base_bdevs_discovered": 2, 00:07:11.603 "num_base_bdevs_operational": 2, 00:07:11.603 "base_bdevs_list": [ 00:07:11.603 { 00:07:11.603 "name": "BaseBdev1", 00:07:11.603 "uuid": "0811fba3-f130-4a61-a869-69bf97319c9e", 00:07:11.603 "is_configured": true, 00:07:11.603 "data_offset": 2048, 00:07:11.603 "data_size": 63488 
00:07:11.603 }, 00:07:11.603 { 00:07:11.603 "name": "BaseBdev2", 00:07:11.603 "uuid": "0a2365b8-4495-4ce7-a9aa-3c972b146504", 00:07:11.603 "is_configured": true, 00:07:11.603 "data_offset": 2048, 00:07:11.603 "data_size": 63488 00:07:11.603 } 00:07:11.603 ] 00:07:11.603 }' 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.603 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:11.863 [2024-09-30 23:24:51.653912] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:11.863 "name": 
"Existed_Raid", 00:07:11.863 "aliases": [ 00:07:11.863 "f5a8b068-c3dc-4b7c-b9bb-4b41f364a2bf" 00:07:11.863 ], 00:07:11.863 "product_name": "Raid Volume", 00:07:11.863 "block_size": 512, 00:07:11.863 "num_blocks": 126976, 00:07:11.863 "uuid": "f5a8b068-c3dc-4b7c-b9bb-4b41f364a2bf", 00:07:11.863 "assigned_rate_limits": { 00:07:11.863 "rw_ios_per_sec": 0, 00:07:11.863 "rw_mbytes_per_sec": 0, 00:07:11.863 "r_mbytes_per_sec": 0, 00:07:11.863 "w_mbytes_per_sec": 0 00:07:11.863 }, 00:07:11.863 "claimed": false, 00:07:11.863 "zoned": false, 00:07:11.863 "supported_io_types": { 00:07:11.863 "read": true, 00:07:11.863 "write": true, 00:07:11.863 "unmap": true, 00:07:11.863 "flush": true, 00:07:11.863 "reset": true, 00:07:11.863 "nvme_admin": false, 00:07:11.863 "nvme_io": false, 00:07:11.863 "nvme_io_md": false, 00:07:11.863 "write_zeroes": true, 00:07:11.863 "zcopy": false, 00:07:11.863 "get_zone_info": false, 00:07:11.863 "zone_management": false, 00:07:11.863 "zone_append": false, 00:07:11.863 "compare": false, 00:07:11.863 "compare_and_write": false, 00:07:11.863 "abort": false, 00:07:11.863 "seek_hole": false, 00:07:11.863 "seek_data": false, 00:07:11.863 "copy": false, 00:07:11.863 "nvme_iov_md": false 00:07:11.863 }, 00:07:11.863 "memory_domains": [ 00:07:11.863 { 00:07:11.863 "dma_device_id": "system", 00:07:11.863 "dma_device_type": 1 00:07:11.863 }, 00:07:11.863 { 00:07:11.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.863 "dma_device_type": 2 00:07:11.863 }, 00:07:11.863 { 00:07:11.863 "dma_device_id": "system", 00:07:11.863 "dma_device_type": 1 00:07:11.863 }, 00:07:11.863 { 00:07:11.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.863 "dma_device_type": 2 00:07:11.863 } 00:07:11.863 ], 00:07:11.863 "driver_specific": { 00:07:11.863 "raid": { 00:07:11.863 "uuid": "f5a8b068-c3dc-4b7c-b9bb-4b41f364a2bf", 00:07:11.863 "strip_size_kb": 64, 00:07:11.863 "state": "online", 00:07:11.863 "raid_level": "raid0", 00:07:11.863 "superblock": true, 00:07:11.863 
"num_base_bdevs": 2, 00:07:11.863 "num_base_bdevs_discovered": 2, 00:07:11.863 "num_base_bdevs_operational": 2, 00:07:11.863 "base_bdevs_list": [ 00:07:11.863 { 00:07:11.863 "name": "BaseBdev1", 00:07:11.863 "uuid": "0811fba3-f130-4a61-a869-69bf97319c9e", 00:07:11.863 "is_configured": true, 00:07:11.863 "data_offset": 2048, 00:07:11.863 "data_size": 63488 00:07:11.863 }, 00:07:11.863 { 00:07:11.863 "name": "BaseBdev2", 00:07:11.863 "uuid": "0a2365b8-4495-4ce7-a9aa-3c972b146504", 00:07:11.863 "is_configured": true, 00:07:11.863 "data_offset": 2048, 00:07:11.863 "data_size": 63488 00:07:11.863 } 00:07:11.863 ] 00:07:11.863 } 00:07:11.863 } 00:07:11.863 }' 00:07:11.863 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:12.123 BaseBdev2' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 [2024-09-30 23:24:51.853239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:12.123 [2024-09-30 23:24:51.853276] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.123 [2024-09-30 23:24:51.853330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.123 23:24:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.123 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.123 "name": "Existed_Raid", 00:07:12.123 "uuid": "f5a8b068-c3dc-4b7c-b9bb-4b41f364a2bf", 00:07:12.123 "strip_size_kb": 64, 00:07:12.123 "state": "offline", 00:07:12.123 "raid_level": "raid0", 00:07:12.123 "superblock": true, 00:07:12.123 "num_base_bdevs": 2, 00:07:12.123 "num_base_bdevs_discovered": 1, 00:07:12.123 "num_base_bdevs_operational": 1, 00:07:12.123 "base_bdevs_list": [ 00:07:12.123 { 00:07:12.123 "name": null, 00:07:12.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.123 "is_configured": false, 00:07:12.123 "data_offset": 0, 00:07:12.123 "data_size": 63488 00:07:12.123 }, 00:07:12.123 { 00:07:12.123 "name": "BaseBdev2", 00:07:12.123 "uuid": "0a2365b8-4495-4ce7-a9aa-3c972b146504", 00:07:12.124 "is_configured": true, 00:07:12.124 "data_offset": 2048, 00:07:12.124 "data_size": 63488 00:07:12.124 } 00:07:12.124 ] 00:07:12.124 }' 00:07:12.124 23:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.124 23:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.694 23:24:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.694 [2024-09-30 23:24:52.331747] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:12.694 [2024-09-30 23:24:52.331819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.694 23:24:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72404 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72404 ']' 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72404 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72404 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.694 killing process with pid 72404 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72404' 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72404 00:07:12.694 [2024-09-30 23:24:52.435984] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.694 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72404 00:07:12.694 [2024-09-30 23:24:52.436958] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.955 23:24:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:12.955 00:07:12.955 real 0m3.821s 00:07:12.955 user 0m6.017s 00:07:12.955 sys 0m0.741s 00:07:12.955 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.955 23:24:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.955 ************************************ 00:07:12.955 END TEST raid_state_function_test_sb 00:07:12.955 ************************************ 00:07:12.955 23:24:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:12.955 23:24:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:12.955 23:24:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.955 23:24:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.955 ************************************ 00:07:12.955 START TEST raid_superblock_test 00:07:12.955 ************************************ 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:12.955 23:24:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72640 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72640 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72640 ']' 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.955 23:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.215 [2024-09-30 23:24:52.834286] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:13.216 [2024-09-30 23:24:52.834808] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72640 ] 00:07:13.216 [2024-09-30 23:24:52.994396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.216 [2024-09-30 23:24:53.037769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.475 [2024-09-30 23:24:53.080084] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.475 [2024-09-30 23:24:53.080137] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:14.045 23:24:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.045 malloc1 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.045 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.045 [2024-09-30 23:24:53.674163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:14.045 [2024-09-30 23:24:53.674233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.045 [2024-09-30 23:24:53.674257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:14.045 [2024-09-30 23:24:53.674272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.045 [2024-09-30 23:24:53.676366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.046 [2024-09-30 23:24:53.676402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:14.046 pt1 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:14.046 23:24:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.046 malloc2 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.046 [2024-09-30 23:24:53.710735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:14.046 [2024-09-30 23:24:53.710800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.046 [2024-09-30 23:24:53.710818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:14.046 
[2024-09-30 23:24:53.710831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.046 [2024-09-30 23:24:53.712955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.046 [2024-09-30 23:24:53.712989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:14.046 pt2 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.046 [2024-09-30 23:24:53.722754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:14.046 [2024-09-30 23:24:53.724555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:14.046 [2024-09-30 23:24:53.724683] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:14.046 [2024-09-30 23:24:53.724708] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.046 [2024-09-30 23:24:53.724970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:14.046 [2024-09-30 23:24:53.725099] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:14.046 [2024-09-30 23:24:53.725112] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:14.046 [2024-09-30 23:24:53.725227] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.046 "name": "raid_bdev1", 00:07:14.046 "uuid": 
"edf52d9f-8a2c-42d7-87c2-07b156f5f34b", 00:07:14.046 "strip_size_kb": 64, 00:07:14.046 "state": "online", 00:07:14.046 "raid_level": "raid0", 00:07:14.046 "superblock": true, 00:07:14.046 "num_base_bdevs": 2, 00:07:14.046 "num_base_bdevs_discovered": 2, 00:07:14.046 "num_base_bdevs_operational": 2, 00:07:14.046 "base_bdevs_list": [ 00:07:14.046 { 00:07:14.046 "name": "pt1", 00:07:14.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.046 "is_configured": true, 00:07:14.046 "data_offset": 2048, 00:07:14.046 "data_size": 63488 00:07:14.046 }, 00:07:14.046 { 00:07:14.046 "name": "pt2", 00:07:14.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.046 "is_configured": true, 00:07:14.046 "data_offset": 2048, 00:07:14.046 "data_size": 63488 00:07:14.046 } 00:07:14.046 ] 00:07:14.046 }' 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.046 23:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.615 
23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:14.615 [2024-09-30 23:24:54.186212] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:14.615 "name": "raid_bdev1", 00:07:14.615 "aliases": [ 00:07:14.615 "edf52d9f-8a2c-42d7-87c2-07b156f5f34b" 00:07:14.615 ], 00:07:14.615 "product_name": "Raid Volume", 00:07:14.615 "block_size": 512, 00:07:14.615 "num_blocks": 126976, 00:07:14.615 "uuid": "edf52d9f-8a2c-42d7-87c2-07b156f5f34b", 00:07:14.615 "assigned_rate_limits": { 00:07:14.615 "rw_ios_per_sec": 0, 00:07:14.615 "rw_mbytes_per_sec": 0, 00:07:14.615 "r_mbytes_per_sec": 0, 00:07:14.615 "w_mbytes_per_sec": 0 00:07:14.615 }, 00:07:14.615 "claimed": false, 00:07:14.615 "zoned": false, 00:07:14.615 "supported_io_types": { 00:07:14.615 "read": true, 00:07:14.615 "write": true, 00:07:14.615 "unmap": true, 00:07:14.615 "flush": true, 00:07:14.615 "reset": true, 00:07:14.615 "nvme_admin": false, 00:07:14.615 "nvme_io": false, 00:07:14.615 "nvme_io_md": false, 00:07:14.615 "write_zeroes": true, 00:07:14.615 "zcopy": false, 00:07:14.615 "get_zone_info": false, 00:07:14.615 "zone_management": false, 00:07:14.615 "zone_append": false, 00:07:14.615 "compare": false, 00:07:14.615 "compare_and_write": false, 00:07:14.615 "abort": false, 00:07:14.615 "seek_hole": false, 00:07:14.615 "seek_data": false, 00:07:14.615 "copy": false, 00:07:14.615 "nvme_iov_md": false 00:07:14.615 }, 00:07:14.615 "memory_domains": [ 00:07:14.615 { 00:07:14.615 "dma_device_id": "system", 00:07:14.615 "dma_device_type": 1 00:07:14.615 }, 00:07:14.615 { 00:07:14.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.615 "dma_device_type": 2 00:07:14.615 }, 00:07:14.615 { 00:07:14.615 "dma_device_id": "system", 00:07:14.615 
"dma_device_type": 1 00:07:14.615 }, 00:07:14.615 { 00:07:14.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.615 "dma_device_type": 2 00:07:14.615 } 00:07:14.615 ], 00:07:14.615 "driver_specific": { 00:07:14.615 "raid": { 00:07:14.615 "uuid": "edf52d9f-8a2c-42d7-87c2-07b156f5f34b", 00:07:14.615 "strip_size_kb": 64, 00:07:14.615 "state": "online", 00:07:14.615 "raid_level": "raid0", 00:07:14.615 "superblock": true, 00:07:14.615 "num_base_bdevs": 2, 00:07:14.615 "num_base_bdevs_discovered": 2, 00:07:14.615 "num_base_bdevs_operational": 2, 00:07:14.615 "base_bdevs_list": [ 00:07:14.615 { 00:07:14.615 "name": "pt1", 00:07:14.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.615 "is_configured": true, 00:07:14.615 "data_offset": 2048, 00:07:14.615 "data_size": 63488 00:07:14.615 }, 00:07:14.615 { 00:07:14.615 "name": "pt2", 00:07:14.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.615 "is_configured": true, 00:07:14.615 "data_offset": 2048, 00:07:14.615 "data_size": 63488 00:07:14.615 } 00:07:14.615 ] 00:07:14.615 } 00:07:14.615 } 00:07:14.615 }' 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:14.615 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:14.615 pt2' 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:14.616 [2024-09-30 23:24:54.429704] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:14.616 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=edf52d9f-8a2c-42d7-87c2-07b156f5f34b 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z edf52d9f-8a2c-42d7-87c2-07b156f5f34b ']' 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 [2024-09-30 23:24:54.477396] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:14.879 [2024-09-30 23:24:54.477461] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.879 [2024-09-30 23:24:54.477561] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.879 [2024-09-30 23:24:54.477635] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.879 [2024-09-30 23:24:54.477710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 
23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 [2024-09-30 23:24:54.597229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:14.879 [2024-09-30 23:24:54.599166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:14.879 [2024-09-30 23:24:54.599273] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:14.879 [2024-09-30 23:24:54.599356] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:14.879 [2024-09-30 23:24:54.599414] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:14.879 [2024-09-30 23:24:54.599443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:14.879 request: 00:07:14.879 { 00:07:14.879 "name": "raid_bdev1", 00:07:14.879 "raid_level": "raid0", 00:07:14.879 "base_bdevs": [ 00:07:14.879 "malloc1", 00:07:14.879 "malloc2" 00:07:14.879 ], 00:07:14.879 "strip_size_kb": 64, 00:07:14.879 "superblock": false, 00:07:14.879 "method": "bdev_raid_create", 00:07:14.879 "req_id": 1 00:07:14.879 } 00:07:14.879 Got JSON-RPC error response 00:07:14.879 response: 00:07:14.879 { 00:07:14.879 "code": -17, 00:07:14.879 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:14.879 } 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 [2024-09-30 23:24:54.661077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:14.879 [2024-09-30 23:24:54.661156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.879 [2024-09-30 23:24:54.661189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:14.879 [2024-09-30 23:24:54.661215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.879 [2024-09-30 23:24:54.663293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.879 [2024-09-30 23:24:54.663359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:14.879 [2024-09-30 23:24:54.663441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:14.879 [2024-09-30 23:24:54.663499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:14.879 pt1 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.879 "name": "raid_bdev1", 00:07:14.879 "uuid": "edf52d9f-8a2c-42d7-87c2-07b156f5f34b", 00:07:14.879 "strip_size_kb": 64, 00:07:14.879 "state": "configuring", 00:07:14.879 "raid_level": "raid0", 00:07:14.879 "superblock": true, 00:07:14.879 "num_base_bdevs": 2, 00:07:14.879 "num_base_bdevs_discovered": 1, 00:07:14.879 "num_base_bdevs_operational": 2, 00:07:14.879 "base_bdevs_list": [ 00:07:14.879 { 00:07:14.879 "name": "pt1", 00:07:14.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.879 "is_configured": true, 00:07:14.879 "data_offset": 2048, 00:07:14.879 "data_size": 63488 00:07:14.879 }, 00:07:14.879 { 00:07:14.879 "name": null, 00:07:14.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.879 "is_configured": false, 00:07:14.879 "data_offset": 2048, 00:07:14.879 "data_size": 63488 00:07:14.879 } 00:07:14.879 ] 00:07:14.879 }' 00:07:14.879 23:24:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.879 23:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.448 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:15.448 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:15.448 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:15.448 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:15.448 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.448 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.448 [2024-09-30 23:24:55.076363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:15.448 [2024-09-30 23:24:55.076455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.448 [2024-09-30 23:24:55.076494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:15.448 [2024-09-30 23:24:55.076521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.448 [2024-09-30 23:24:55.076924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.448 [2024-09-30 23:24:55.076974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:15.448 [2024-09-30 23:24:55.077064] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:15.448 [2024-09-30 23:24:55.077107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:15.448 [2024-09-30 23:24:55.077206] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:15.449 [2024-09-30 23:24:55.077244] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:15.449 [2024-09-30 23:24:55.077476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:15.449 [2024-09-30 23:24:55.077614] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:15.449 [2024-09-30 23:24:55.077657] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:15.449 [2024-09-30 23:24:55.077787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.449 pt2 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.449 "name": "raid_bdev1", 00:07:15.449 "uuid": "edf52d9f-8a2c-42d7-87c2-07b156f5f34b", 00:07:15.449 "strip_size_kb": 64, 00:07:15.449 "state": "online", 00:07:15.449 "raid_level": "raid0", 00:07:15.449 "superblock": true, 00:07:15.449 "num_base_bdevs": 2, 00:07:15.449 "num_base_bdevs_discovered": 2, 00:07:15.449 "num_base_bdevs_operational": 2, 00:07:15.449 "base_bdevs_list": [ 00:07:15.449 { 00:07:15.449 "name": "pt1", 00:07:15.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.449 "is_configured": true, 00:07:15.449 "data_offset": 2048, 00:07:15.449 "data_size": 63488 00:07:15.449 }, 00:07:15.449 { 00:07:15.449 "name": "pt2", 00:07:15.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.449 "is_configured": true, 00:07:15.449 "data_offset": 2048, 00:07:15.449 "data_size": 63488 00:07:15.449 } 00:07:15.449 ] 00:07:15.449 }' 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.449 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.708 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:15.708 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:15.708 
23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.708 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.708 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.708 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.708 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.708 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.709 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.709 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.709 [2024-09-30 23:24:55.531850] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.709 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.968 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:15.968 "name": "raid_bdev1", 00:07:15.968 "aliases": [ 00:07:15.968 "edf52d9f-8a2c-42d7-87c2-07b156f5f34b" 00:07:15.968 ], 00:07:15.968 "product_name": "Raid Volume", 00:07:15.968 "block_size": 512, 00:07:15.968 "num_blocks": 126976, 00:07:15.968 "uuid": "edf52d9f-8a2c-42d7-87c2-07b156f5f34b", 00:07:15.968 "assigned_rate_limits": { 00:07:15.968 "rw_ios_per_sec": 0, 00:07:15.968 "rw_mbytes_per_sec": 0, 00:07:15.968 "r_mbytes_per_sec": 0, 00:07:15.968 "w_mbytes_per_sec": 0 00:07:15.968 }, 00:07:15.968 "claimed": false, 00:07:15.968 "zoned": false, 00:07:15.968 "supported_io_types": { 00:07:15.968 "read": true, 00:07:15.968 "write": true, 00:07:15.968 "unmap": true, 00:07:15.968 "flush": true, 00:07:15.968 "reset": true, 00:07:15.968 "nvme_admin": false, 00:07:15.968 "nvme_io": false, 00:07:15.968 "nvme_io_md": false, 00:07:15.968 
"write_zeroes": true, 00:07:15.968 "zcopy": false, 00:07:15.968 "get_zone_info": false, 00:07:15.968 "zone_management": false, 00:07:15.968 "zone_append": false, 00:07:15.968 "compare": false, 00:07:15.968 "compare_and_write": false, 00:07:15.968 "abort": false, 00:07:15.968 "seek_hole": false, 00:07:15.968 "seek_data": false, 00:07:15.968 "copy": false, 00:07:15.968 "nvme_iov_md": false 00:07:15.968 }, 00:07:15.968 "memory_domains": [ 00:07:15.968 { 00:07:15.968 "dma_device_id": "system", 00:07:15.968 "dma_device_type": 1 00:07:15.968 }, 00:07:15.968 { 00:07:15.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.968 "dma_device_type": 2 00:07:15.968 }, 00:07:15.968 { 00:07:15.968 "dma_device_id": "system", 00:07:15.968 "dma_device_type": 1 00:07:15.969 }, 00:07:15.969 { 00:07:15.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.969 "dma_device_type": 2 00:07:15.969 } 00:07:15.969 ], 00:07:15.969 "driver_specific": { 00:07:15.969 "raid": { 00:07:15.969 "uuid": "edf52d9f-8a2c-42d7-87c2-07b156f5f34b", 00:07:15.969 "strip_size_kb": 64, 00:07:15.969 "state": "online", 00:07:15.969 "raid_level": "raid0", 00:07:15.969 "superblock": true, 00:07:15.969 "num_base_bdevs": 2, 00:07:15.969 "num_base_bdevs_discovered": 2, 00:07:15.969 "num_base_bdevs_operational": 2, 00:07:15.969 "base_bdevs_list": [ 00:07:15.969 { 00:07:15.969 "name": "pt1", 00:07:15.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.969 "is_configured": true, 00:07:15.969 "data_offset": 2048, 00:07:15.969 "data_size": 63488 00:07:15.969 }, 00:07:15.969 { 00:07:15.969 "name": "pt2", 00:07:15.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.969 "is_configured": true, 00:07:15.969 "data_offset": 2048, 00:07:15.969 "data_size": 63488 00:07:15.969 } 00:07:15.969 ] 00:07:15.969 } 00:07:15.969 } 00:07:15.969 }' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:15.969 pt2' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.969 23:24:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.969 [2024-09-30 23:24:55.767405] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' edf52d9f-8a2c-42d7-87c2-07b156f5f34b '!=' edf52d9f-8a2c-42d7-87c2-07b156f5f34b ']' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72640 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72640 ']' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72640 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.969 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72640 00:07:16.251 23:24:55 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.251 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.251 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72640' 00:07:16.251 killing process with pid 72640 00:07:16.251 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72640 00:07:16.251 [2024-09-30 23:24:55.840640] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.251 [2024-09-30 23:24:55.840839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.251 23:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72640 00:07:16.251 [2024-09-30 23:24:55.840975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.251 [2024-09-30 23:24:55.841026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:16.251 [2024-09-30 23:24:55.864463] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.529 23:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:16.529 00:07:16.529 real 0m3.365s 00:07:16.529 user 0m5.165s 00:07:16.529 sys 0m0.710s 00:07:16.529 23:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.529 23:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.529 ************************************ 00:07:16.529 END TEST raid_superblock_test 00:07:16.529 ************************************ 00:07:16.529 23:24:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:16.529 23:24:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:16.529 23:24:56 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:16.529 23:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.529 ************************************ 00:07:16.529 START TEST raid_read_error_test 00:07:16.529 ************************************ 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LGtAzHif3p 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72846 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72846 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72846 ']' 00:07:16.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.529 23:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.529 [2024-09-30 23:24:56.297539] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:16.529 [2024-09-30 23:24:56.297725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72846 ] 00:07:16.789 [2024-09-30 23:24:56.468963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.789 [2024-09-30 23:24:56.515549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.789 [2024-09-30 23:24:56.558434] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.789 [2024-09-30 23:24:56.558473] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.360 BaseBdev1_malloc 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.360 true 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.360 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.360 [2024-09-30 23:24:57.149295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:17.360 [2024-09-30 23:24:57.149367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.361 [2024-09-30 23:24:57.149390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:17.361 [2024-09-30 23:24:57.149401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.361 [2024-09-30 23:24:57.151456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.361 [2024-09-30 23:24:57.151501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:17.361 BaseBdev1 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:17.361 BaseBdev2_malloc 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.361 true 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.361 [2024-09-30 23:24:57.202774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:17.361 [2024-09-30 23:24:57.202906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.361 [2024-09-30 23:24:57.202932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:17.361 [2024-09-30 23:24:57.202941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.361 [2024-09-30 23:24:57.205057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.361 [2024-09-30 23:24:57.205090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:17.361 BaseBdev2 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:17.361 23:24:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.361 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.619 [2024-09-30 23:24:57.214804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.619 [2024-09-30 23:24:57.216790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:17.619 [2024-09-30 23:24:57.216991] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:17.619 [2024-09-30 23:24:57.217006] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:17.619 [2024-09-30 23:24:57.217243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:17.619 [2024-09-30 23:24:57.217371] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:17.619 [2024-09-30 23:24:57.217384] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:17.619 [2024-09-30 23:24:57.217510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.619 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.619 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:17.619 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.619 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.619 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.619 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.619 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.620 "name": "raid_bdev1", 00:07:17.620 "uuid": "986af16b-f660-452b-9168-507598ea652d", 00:07:17.620 "strip_size_kb": 64, 00:07:17.620 "state": "online", 00:07:17.620 "raid_level": "raid0", 00:07:17.620 "superblock": true, 00:07:17.620 "num_base_bdevs": 2, 00:07:17.620 "num_base_bdevs_discovered": 2, 00:07:17.620 "num_base_bdevs_operational": 2, 00:07:17.620 "base_bdevs_list": [ 00:07:17.620 { 00:07:17.620 "name": "BaseBdev1", 00:07:17.620 "uuid": "9e71caa5-4b74-5458-8e0a-a891651c6ba5", 00:07:17.620 "is_configured": true, 00:07:17.620 "data_offset": 2048, 00:07:17.620 "data_size": 63488 00:07:17.620 }, 00:07:17.620 { 00:07:17.620 "name": "BaseBdev2", 00:07:17.620 "uuid": "3a6ebad4-fd6e-5e0f-8433-58b8ac88fb7a", 00:07:17.620 "is_configured": true, 00:07:17.620 "data_offset": 2048, 00:07:17.620 "data_size": 63488 00:07:17.620 } 00:07:17.620 ] 00:07:17.620 }' 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.620 23:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.878 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:17.878 23:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:18.138 [2024-09-30 23:24:57.766367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.075 23:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.075 "name": "raid_bdev1", 00:07:19.075 "uuid": "986af16b-f660-452b-9168-507598ea652d", 00:07:19.075 "strip_size_kb": 64, 00:07:19.075 "state": "online", 00:07:19.075 "raid_level": "raid0", 00:07:19.075 "superblock": true, 00:07:19.075 "num_base_bdevs": 2, 00:07:19.075 "num_base_bdevs_discovered": 2, 00:07:19.075 "num_base_bdevs_operational": 2, 00:07:19.075 "base_bdevs_list": [ 00:07:19.075 { 00:07:19.075 "name": "BaseBdev1", 00:07:19.075 "uuid": "9e71caa5-4b74-5458-8e0a-a891651c6ba5", 00:07:19.075 "is_configured": true, 00:07:19.075 "data_offset": 2048, 00:07:19.075 "data_size": 63488 00:07:19.075 }, 00:07:19.075 { 00:07:19.075 "name": "BaseBdev2", 00:07:19.075 "uuid": "3a6ebad4-fd6e-5e0f-8433-58b8ac88fb7a", 00:07:19.075 "is_configured": true, 00:07:19.075 "data_offset": 2048, 00:07:19.075 "data_size": 63488 00:07:19.075 } 00:07:19.075 ] 00:07:19.075 }' 00:07:19.076 23:24:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.076 23:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.337 [2024-09-30 23:24:59.125800] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.337 [2024-09-30 23:24:59.125836] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.337 [2024-09-30 23:24:59.128298] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.337 [2024-09-30 23:24:59.128341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.337 [2024-09-30 23:24:59.128373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.337 [2024-09-30 23:24:59.128381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:19.337 { 00:07:19.337 "results": [ 00:07:19.337 { 00:07:19.337 "job": "raid_bdev1", 00:07:19.337 "core_mask": "0x1", 00:07:19.337 "workload": "randrw", 00:07:19.337 "percentage": 50, 00:07:19.337 "status": "finished", 00:07:19.337 "queue_depth": 1, 00:07:19.337 "io_size": 131072, 00:07:19.337 "runtime": 1.360344, 00:07:19.337 "iops": 17860.92341348953, 00:07:19.337 "mibps": 2232.6154266861913, 00:07:19.337 "io_failed": 1, 00:07:19.337 "io_timeout": 0, 00:07:19.337 "avg_latency_us": 77.49528305921991, 00:07:19.337 "min_latency_us": 24.482096069868994, 00:07:19.337 "max_latency_us": 1416.6078602620087 00:07:19.337 } 00:07:19.337 ], 00:07:19.337 "core_count": 1 00:07:19.337 } 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72846 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72846 ']' 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72846 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:19.337 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.338 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72846 00:07:19.338 killing process with pid 72846 00:07:19.338 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.338 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.338 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72846' 00:07:19.338 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72846 00:07:19.338 [2024-09-30 23:24:59.173787] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.338 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72846 00:07:19.338 [2024-09-30 23:24:59.189071] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.600 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LGtAzHif3p 00:07:19.600 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:19.600 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:19.600 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:19.600 23:24:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:19.600 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.600 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.600 23:24:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:19.601 00:07:19.601 real 0m3.243s 00:07:19.601 user 0m4.077s 00:07:19.601 sys 0m0.553s 00:07:19.601 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.601 23:24:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.601 ************************************ 00:07:19.601 END TEST raid_read_error_test 00:07:19.601 ************************************ 00:07:19.860 23:24:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:19.860 23:24:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:19.860 23:24:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.860 23:24:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.860 ************************************ 00:07:19.860 START TEST raid_write_error_test 00:07:19.860 ************************************ 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.860 23:24:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kLsiyxalNU 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72975 00:07:19.860 23:24:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72975 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72975 ']' 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.860 23:24:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.860 [2024-09-30 23:24:59.612370] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:19.860 [2024-09-30 23:24:59.612500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72975 ] 00:07:20.120 [2024-09-30 23:24:59.776386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.120 [2024-09-30 23:24:59.821266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.120 [2024-09-30 23:24:59.863905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.120 [2024-09-30 23:24:59.863949] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.689 BaseBdev1_malloc 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.689 true 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.689 [2024-09-30 23:25:00.462459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:20.689 [2024-09-30 23:25:00.462510] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.689 [2024-09-30 23:25:00.462530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:20.689 [2024-09-30 23:25:00.462538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.689 [2024-09-30 23:25:00.464644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.689 [2024-09-30 23:25:00.464679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:20.689 BaseBdev1 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.689 BaseBdev2_malloc 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:20.689 23:25:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.689 true 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.689 [2024-09-30 23:25:00.510887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:20.689 [2024-09-30 23:25:00.510940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.689 [2024-09-30 23:25:00.510959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:20.689 [2024-09-30 23:25:00.510968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.689 [2024-09-30 23:25:00.513035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.689 [2024-09-30 23:25:00.513066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:20.689 BaseBdev2 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.689 [2024-09-30 23:25:00.522901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:20.689 [2024-09-30 23:25:00.524714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:20.689 [2024-09-30 23:25:00.524903] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:20.689 [2024-09-30 23:25:00.524917] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.689 [2024-09-30 23:25:00.525167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:20.689 [2024-09-30 23:25:00.525321] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:20.689 [2024-09-30 23:25:00.525347] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:20.689 [2024-09-30 23:25:00.525474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.689 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.690 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.690 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.690 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.949 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.949 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.949 "name": "raid_bdev1", 00:07:20.949 "uuid": "880b4811-d442-4c43-a7c0-908346bd4afd", 00:07:20.949 "strip_size_kb": 64, 00:07:20.949 "state": "online", 00:07:20.949 "raid_level": "raid0", 00:07:20.949 "superblock": true, 00:07:20.949 "num_base_bdevs": 2, 00:07:20.949 "num_base_bdevs_discovered": 2, 00:07:20.949 "num_base_bdevs_operational": 2, 00:07:20.949 "base_bdevs_list": [ 00:07:20.949 { 00:07:20.949 "name": "BaseBdev1", 00:07:20.949 "uuid": "3db81f3f-2390-52f5-95c0-9139ad702252", 00:07:20.949 "is_configured": true, 00:07:20.949 "data_offset": 2048, 00:07:20.949 "data_size": 63488 00:07:20.949 }, 00:07:20.949 { 00:07:20.949 "name": "BaseBdev2", 00:07:20.949 "uuid": "94689df3-cd91-524a-b782-665e02a76431", 00:07:20.949 "is_configured": true, 00:07:20.949 "data_offset": 2048, 00:07:20.949 "data_size": 63488 00:07:20.950 } 00:07:20.950 ] 00:07:20.950 }' 00:07:20.950 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.950 23:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.208 23:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:21.208 23:25:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:21.208 [2024-09-30 23:25:01.054540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.147 23:25:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.147 23:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.407 23:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.407 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.407 "name": "raid_bdev1", 00:07:22.407 "uuid": "880b4811-d442-4c43-a7c0-908346bd4afd", 00:07:22.407 "strip_size_kb": 64, 00:07:22.407 "state": "online", 00:07:22.407 "raid_level": "raid0", 00:07:22.407 "superblock": true, 00:07:22.407 "num_base_bdevs": 2, 00:07:22.407 "num_base_bdevs_discovered": 2, 00:07:22.407 "num_base_bdevs_operational": 2, 00:07:22.407 "base_bdevs_list": [ 00:07:22.407 { 00:07:22.407 "name": "BaseBdev1", 00:07:22.407 "uuid": "3db81f3f-2390-52f5-95c0-9139ad702252", 00:07:22.407 "is_configured": true, 00:07:22.407 "data_offset": 2048, 00:07:22.407 "data_size": 63488 00:07:22.407 }, 00:07:22.407 { 00:07:22.407 "name": "BaseBdev2", 00:07:22.407 "uuid": "94689df3-cd91-524a-b782-665e02a76431", 00:07:22.407 "is_configured": true, 00:07:22.407 "data_offset": 2048, 00:07:22.407 "data_size": 63488 00:07:22.407 } 00:07:22.407 ] 00:07:22.407 }' 00:07:22.407 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.407 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.667 [2024-09-30 23:25:02.409960] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.667 [2024-09-30 23:25:02.409994] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.667 [2024-09-30 23:25:02.412389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.667 [2024-09-30 23:25:02.412432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.667 [2024-09-30 23:25:02.412464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.667 [2024-09-30 23:25:02.412483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72975 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 72975 ']' 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72975 00:07:22.667 { 00:07:22.667 "results": [ 00:07:22.667 { 00:07:22.667 "job": "raid_bdev1", 00:07:22.667 "core_mask": "0x1", 00:07:22.667 "workload": "randrw", 00:07:22.667 "percentage": 50, 00:07:22.667 "status": "finished", 00:07:22.667 "queue_depth": 1, 00:07:22.667 "io_size": 131072, 00:07:22.667 "runtime": 1.356354, 00:07:22.667 "iops": 17796.976305595737, 00:07:22.667 "mibps": 2224.622038199467, 00:07:22.667 "io_failed": 1, 00:07:22.667 "io_timeout": 0, 00:07:22.667 "avg_latency_us": 77.81145298712387, 00:07:22.667 "min_latency_us": 
24.482096069868994, 00:07:22.667 "max_latency_us": 1409.4532751091704 00:07:22.667 } 00:07:22.667 ], 00:07:22.667 "core_count": 1 00:07:22.667 } 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72975 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.667 killing process with pid 72975 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72975' 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72975 00:07:22.667 [2024-09-30 23:25:02.456302] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.667 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72975 00:07:22.667 [2024-09-30 23:25:02.471292] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kLsiyxalNU 00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1
00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:07:22.927
00:07:22.927 real 0m3.215s
00:07:22.927 user 0m4.057s
00:07:22.927 sys 0m0.541s
00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:22.927 23:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.927 ************************************
00:07:22.927 END TEST raid_write_error_test
00:07:22.927 ************************************
00:07:22.927 23:25:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:22.927 23:25:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false
00:07:22.927 23:25:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:22.927 23:25:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:22.927 23:25:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:23.186 ************************************
00:07:23.186 START TEST raid_state_function_test
00:07:23.186 ************************************
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:23.186 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73102
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:23.187 Process raid pid: 73102
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73102'
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73102
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73102 ']'
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:23.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:23.187 23:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.187 [2024-09-30 23:25:02.894889] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:07:23.187 [2024-09-30 23:25:02.895021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:23.446 [2024-09-30 23:25:03.061954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.446 [2024-09-30 23:25:03.106127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.446 [2024-09-30 23:25:03.147648] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:23.446 [2024-09-30 23:25:03.147688] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.014 [2024-09-30 23:25:03.716796] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:24.014 [2024-09-30 23:25:03.716839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:24.014 [2024-09-30 23:25:03.716851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:24.014 [2024-09-30 23:25:03.716872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.014 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:24.014 "name": "Existed_Raid",
00:07:24.014 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.015 "strip_size_kb": 64,
00:07:24.015 "state": "configuring",
00:07:24.015 "raid_level": "concat",
00:07:24.015 "superblock": false,
00:07:24.015 "num_base_bdevs": 2,
00:07:24.015 "num_base_bdevs_discovered": 0,
00:07:24.015 "num_base_bdevs_operational": 2,
00:07:24.015 "base_bdevs_list": [
00:07:24.015 {
00:07:24.015 "name": "BaseBdev1",
00:07:24.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.015 "is_configured": false,
00:07:24.015 "data_offset": 0,
00:07:24.015 "data_size": 0
00:07:24.015 },
00:07:24.015 {
00:07:24.015 "name": "BaseBdev2",
00:07:24.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.015 "is_configured": false,
00:07:24.015 "data_offset": 0,
00:07:24.015 "data_size": 0
00:07:24.015 }
00:07:24.015 ]
00:07:24.015 }'
00:07:24.015 23:25:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:24.015 23:25:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.583 [2024-09-30 23:25:04.151988] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:24.583 [2024-09-30 23:25:04.152041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.583 [2024-09-30 23:25:04.164028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:24.583 [2024-09-30 23:25:04.164063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:24.583 [2024-09-30 23:25:04.164072] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:24.583 [2024-09-30 23:25:04.164080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.583 [2024-09-30 23:25:04.184899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:24.583 BaseBdev1
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:24.583 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.584 [
00:07:24.584 {
00:07:24.584 "name": "BaseBdev1",
00:07:24.584 "aliases": [
00:07:24.584 "6f3a623a-1768-4185-a737-45db291b0b30"
00:07:24.584 ],
00:07:24.584 "product_name": "Malloc disk",
00:07:24.584 "block_size": 512,
00:07:24.584 "num_blocks": 65536,
00:07:24.584 "uuid": "6f3a623a-1768-4185-a737-45db291b0b30",
00:07:24.584 "assigned_rate_limits": {
00:07:24.584 "rw_ios_per_sec": 0,
00:07:24.584 "rw_mbytes_per_sec": 0,
00:07:24.584 "r_mbytes_per_sec": 0,
00:07:24.584 "w_mbytes_per_sec": 0
00:07:24.584 },
00:07:24.584 "claimed": true,
00:07:24.584 "claim_type": "exclusive_write",
00:07:24.584 "zoned": false,
00:07:24.584 "supported_io_types": {
00:07:24.584 "read": true,
00:07:24.584 "write": true,
00:07:24.584 "unmap": true,
00:07:24.584 "flush": true,
00:07:24.584 "reset": true,
00:07:24.584 "nvme_admin": false,
00:07:24.584 "nvme_io": false,
00:07:24.584 "nvme_io_md": false,
00:07:24.584 "write_zeroes": true,
00:07:24.584 "zcopy": true,
00:07:24.584 "get_zone_info": false,
00:07:24.584 "zone_management": false,
00:07:24.584 "zone_append": false,
00:07:24.584 "compare": false,
00:07:24.584 "compare_and_write": false,
00:07:24.584 "abort": true,
00:07:24.584 "seek_hole": false,
00:07:24.584 "seek_data": false,
00:07:24.584 "copy": true,
00:07:24.584 "nvme_iov_md": false
00:07:24.584 },
00:07:24.584 "memory_domains": [
00:07:24.584 {
00:07:24.584 "dma_device_id": "system",
00:07:24.584 "dma_device_type": 1
00:07:24.584 },
00:07:24.584 {
00:07:24.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.584 "dma_device_type": 2
00:07:24.584 }
00:07:24.584 ],
00:07:24.584 "driver_specific": {}
00:07:24.584 }
00:07:24.584 ]
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:24.584 "name": "Existed_Raid",
00:07:24.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.584 "strip_size_kb": 64,
00:07:24.584 "state": "configuring",
00:07:24.584 "raid_level": "concat",
00:07:24.584 "superblock": false,
00:07:24.584 "num_base_bdevs": 2,
00:07:24.584 "num_base_bdevs_discovered": 1,
00:07:24.584 "num_base_bdevs_operational": 2,
00:07:24.584 "base_bdevs_list": [
00:07:24.584 {
00:07:24.584 "name": "BaseBdev1",
00:07:24.584 "uuid": "6f3a623a-1768-4185-a737-45db291b0b30",
00:07:24.584 "is_configured": true,
00:07:24.584 "data_offset": 0,
00:07:24.584 "data_size": 65536
00:07:24.584 },
00:07:24.584 {
00:07:24.584 "name": "BaseBdev2",
00:07:24.584 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.584 "is_configured": false,
00:07:24.584 "data_offset": 0,
00:07:24.584 "data_size": 0
00:07:24.584 }
00:07:24.584 ]
00:07:24.584 }'
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:24.584 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.843 [2024-09-30 23:25:04.660085] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:24.843 [2024-09-30 23:25:04.660134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.843 [2024-09-30 23:25:04.672095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:24.843 [2024-09-30 23:25:04.673943] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:24.843 [2024-09-30 23:25:04.673979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.843 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.102 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.102 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:25.102 "name": "Existed_Raid",
00:07:25.102 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:25.102 "strip_size_kb": 64,
00:07:25.102 "state": "configuring",
00:07:25.102 "raid_level": "concat",
00:07:25.102 "superblock": false,
00:07:25.102 "num_base_bdevs": 2,
00:07:25.102 "num_base_bdevs_discovered": 1,
00:07:25.102 "num_base_bdevs_operational": 2,
00:07:25.102 "base_bdevs_list": [
00:07:25.102 {
00:07:25.102 "name": "BaseBdev1",
00:07:25.102 "uuid": "6f3a623a-1768-4185-a737-45db291b0b30",
00:07:25.102 "is_configured": true,
00:07:25.102 "data_offset": 0,
00:07:25.102 "data_size": 65536
00:07:25.102 },
00:07:25.102 {
00:07:25.102 "name": "BaseBdev2",
00:07:25.102 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:25.102 "is_configured": false,
00:07:25.102 "data_offset": 0,
00:07:25.102 "data_size": 0
00:07:25.102 }
00:07:25.102 ]
00:07:25.102 }'
00:07:25.102 23:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:25.102 23:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.362 [2024-09-30 23:25:05.102320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:25.362 [2024-09-30 23:25:05.102473] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:07:25.362 [2024-09-30 23:25:05.102534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:25.362 [2024-09-30 23:25:05.103543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:25.362 [2024-09-30 23:25:05.104078] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:07:25.362 [2024-09-30 23:25:05.104158] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:07:25.362 [2024-09-30 23:25:05.104811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:25.362 BaseBdev2
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.362 [
00:07:25.362 {
00:07:25.362 "name": "BaseBdev2",
00:07:25.362 "aliases": [
00:07:25.362 "1a78f91f-d481-45ba-8dc8-506b43e6cd10"
00:07:25.362 ],
00:07:25.362 "product_name": "Malloc disk",
00:07:25.362 "block_size": 512,
00:07:25.362 "num_blocks": 65536,
00:07:25.362 "uuid": "1a78f91f-d481-45ba-8dc8-506b43e6cd10",
00:07:25.362 "assigned_rate_limits": {
00:07:25.362 "rw_ios_per_sec": 0,
00:07:25.362 "rw_mbytes_per_sec": 0,
00:07:25.362 "r_mbytes_per_sec": 0,
00:07:25.362 "w_mbytes_per_sec": 0
00:07:25.362 },
00:07:25.362 "claimed": true,
00:07:25.362 "claim_type": "exclusive_write",
00:07:25.362 "zoned": false,
00:07:25.362 "supported_io_types": {
00:07:25.362 "read": true,
00:07:25.362 "write": true,
00:07:25.362 "unmap": true,
00:07:25.362 "flush": true,
00:07:25.362 "reset": true,
00:07:25.362 "nvme_admin": false,
00:07:25.362 "nvme_io": false,
00:07:25.362 "nvme_io_md": false,
00:07:25.362 "write_zeroes": true,
00:07:25.362 "zcopy": true,
00:07:25.362 "get_zone_info": false,
00:07:25.362 "zone_management": false,
00:07:25.362 "zone_append": false,
00:07:25.362 "compare": false,
00:07:25.362 "compare_and_write": false,
00:07:25.362 "abort": true,
00:07:25.362 "seek_hole": false,
00:07:25.362 "seek_data": false,
00:07:25.362 "copy": true,
00:07:25.362 "nvme_iov_md": false
00:07:25.362 },
00:07:25.362 "memory_domains": [
00:07:25.362 {
00:07:25.362 "dma_device_id": "system",
00:07:25.362 "dma_device_type": 1
00:07:25.362 },
00:07:25.362 {
00:07:25.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:25.362 "dma_device_type": 2
00:07:25.362 }
00:07:25.362 ],
00:07:25.362 "driver_specific": {}
00:07:25.362 }
00:07:25.362 ]
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:25.362 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:25.363 "name": "Existed_Raid",
00:07:25.363 "uuid": "368bb00f-07e2-47da-a186-ec7ce1b88c55",
00:07:25.363 "strip_size_kb": 64,
00:07:25.363 "state": "online",
00:07:25.363 "raid_level": "concat",
00:07:25.363 "superblock": false,
00:07:25.363 "num_base_bdevs": 2,
00:07:25.363 "num_base_bdevs_discovered": 2,
00:07:25.363 "num_base_bdevs_operational": 2,
00:07:25.363 "base_bdevs_list": [
00:07:25.363 {
00:07:25.363 "name": "BaseBdev1",
00:07:25.363 "uuid": "6f3a623a-1768-4185-a737-45db291b0b30",
00:07:25.363 "is_configured": true,
00:07:25.363 "data_offset": 0,
00:07:25.363 "data_size": 65536
00:07:25.363 },
00:07:25.363 {
00:07:25.363 "name": "BaseBdev2",
00:07:25.363 "uuid": "1a78f91f-d481-45ba-8dc8-506b43e6cd10",
00:07:25.363 "is_configured": true,
00:07:25.363 "data_offset": 0,
00:07:25.363 "data_size": 65536
00:07:25.363 }
00:07:25.363 ]
00:07:25.363 }'
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:25.363 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' [2024-09-30 23:25:05.593749] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:25.931 "name": "Existed_Raid",
00:07:25.931 "aliases": [
00:07:25.931 "368bb00f-07e2-47da-a186-ec7ce1b88c55"
00:07:25.931 ],
00:07:25.931 "product_name": "Raid Volume",
00:07:25.931 "block_size": 512,
00:07:25.931 "num_blocks": 131072,
00:07:25.931 "uuid": "368bb00f-07e2-47da-a186-ec7ce1b88c55",
00:07:25.931 "assigned_rate_limits": {
00:07:25.931 "rw_ios_per_sec": 0,
00:07:25.931 "rw_mbytes_per_sec": 0,
00:07:25.931 "r_mbytes_per_sec": 0,
00:07:25.931 "w_mbytes_per_sec": 0
00:07:25.931 },
00:07:25.931 "claimed": false,
00:07:25.931 "zoned": false,
00:07:25.931 "supported_io_types": {
00:07:25.931 "read": true,
00:07:25.931 "write": true,
00:07:25.931 "unmap": true,
00:07:25.931 "flush": true,
00:07:25.931 "reset": true,
00:07:25.931 "nvme_admin": false,
00:07:25.931 "nvme_io": false,
00:07:25.931 "nvme_io_md": false,
00:07:25.931 "write_zeroes": true,
00:07:25.931 "zcopy": false,
00:07:25.931 "get_zone_info": false,
00:07:25.931 "zone_management": false,
00:07:25.931 "zone_append": false,
00:07:25.931 "compare": false,
00:07:25.931 "compare_and_write": false,
00:07:25.931 "abort": false,
00:07:25.931 "seek_hole": false,
00:07:25.931 "seek_data": false,
00:07:25.931 "copy": false,
00:07:25.931 "nvme_iov_md": false
00:07:25.931 },
00:07:25.931 "memory_domains": [
00:07:25.931 {
00:07:25.931 "dma_device_id": "system",
00:07:25.931 "dma_device_type": 1
00:07:25.931 },
00:07:25.931 {
00:07:25.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:25.931 "dma_device_type": 2
00:07:25.931 },
00:07:25.931 {
00:07:25.931 "dma_device_id": "system",
00:07:25.931 "dma_device_type": 1
00:07:25.931 },
00:07:25.931 {
00:07:25.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:25.931 "dma_device_type": 2
00:07:25.931 }
00:07:25.931 ],
00:07:25.931 "driver_specific": {
00:07:25.931 "raid": {
00:07:25.931 "uuid": "368bb00f-07e2-47da-a186-ec7ce1b88c55",
00:07:25.931 "strip_size_kb": 64,
00:07:25.931 "state": "online",
00:07:25.931 "raid_level": "concat",
00:07:25.931 "superblock": false,
00:07:25.931 "num_base_bdevs": 2,
00:07:25.931 "num_base_bdevs_discovered": 2,
00:07:25.931 "num_base_bdevs_operational": 2,
00:07:25.931 "base_bdevs_list": [
00:07:25.931 {
00:07:25.931 "name": "BaseBdev1",
00:07:25.931 "uuid": "6f3a623a-1768-4185-a737-45db291b0b30",
00:07:25.931 "is_configured": true,
00:07:25.931 "data_offset": 0,
00:07:25.931 "data_size": 65536
00:07:25.931 },
00:07:25.931 {
00:07:25.931 "name": "BaseBdev2",
00:07:25.931 "uuid": "1a78f91f-d481-45ba-8dc8-506b43e6cd10",
00:07:25.931 "is_configured": true,
00:07:25.931 "data_offset": 0,
00:07:25.931 "data_size": 65536
00:07:25.931 }
00:07:25.931 ]
00:07:25.931 }
00:07:25.931 }
00:07:25.931 }'
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:25.931 BaseBdev2'
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:25.931 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:26.190 [2024-09-30 23:25:05.817127] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:26.190 [2024-09-30 23:25:05.817162] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:26.190 [2024-09-30 23:25:05.817211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:26.190 "name": "Existed_Raid",
00:07:26.190 "uuid": "368bb00f-07e2-47da-a186-ec7ce1b88c55",
00:07:26.190 "strip_size_kb": 64,
"state": "offline", 00:07:26.190 "raid_level": "concat", 00:07:26.190 "superblock": false, 00:07:26.190 "num_base_bdevs": 2, 00:07:26.190 "num_base_bdevs_discovered": 1, 00:07:26.190 "num_base_bdevs_operational": 1, 00:07:26.190 "base_bdevs_list": [ 00:07:26.190 { 00:07:26.190 "name": null, 00:07:26.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.190 "is_configured": false, 00:07:26.190 "data_offset": 0, 00:07:26.190 "data_size": 65536 00:07:26.190 }, 00:07:26.190 { 00:07:26.190 "name": "BaseBdev2", 00:07:26.190 "uuid": "1a78f91f-d481-45ba-8dc8-506b43e6cd10", 00:07:26.190 "is_configured": true, 00:07:26.190 "data_offset": 0, 00:07:26.190 "data_size": 65536 00:07:26.190 } 00:07:26.190 ] 00:07:26.190 }' 00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.190 23:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 [2024-09-30 23:25:06.279634] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:26.450 [2024-09-30 23:25:06.279692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.450 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73102 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73102 ']' 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73102 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73102 00:07:26.709 killing process with pid 73102 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73102' 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73102 00:07:26.709 [2024-09-30 23:25:06.371486] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.709 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73102 00:07:26.709 [2024-09-30 23:25:06.372491] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:26.969 00:07:26.969 real 0m3.825s 00:07:26.969 user 0m5.957s 00:07:26.969 sys 0m0.784s 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.969 ************************************ 00:07:26.969 END TEST raid_state_function_test 00:07:26.969 ************************************ 00:07:26.969 23:25:06 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:26.969 23:25:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:26.969 23:25:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.969 23:25:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.969 ************************************ 00:07:26.969 START TEST raid_state_function_test_sb 00:07:26.969 ************************************ 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:26.969 Process raid pid: 73339 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73339 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73339' 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73339 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73339 ']' 00:07:26.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.969 23:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.969 [2024-09-30 23:25:06.791303] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:26.969 [2024-09-30 23:25:06.791512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.229 [2024-09-30 23:25:06.956557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.229 [2024-09-30 23:25:07.001647] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.229 [2024-09-30 23:25:07.044183] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.229 [2024-09-30 23:25:07.044223] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.797 [2024-09-30 23:25:07.633483] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.797 [2024-09-30 23:25:07.633533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.797 [2024-09-30 23:25:07.633545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.797 [2024-09-30 23:25:07.633570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.797 
23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.797 23:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.056 23:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.056 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.056 "name": "Existed_Raid", 00:07:28.056 "uuid": "e26d9443-a78d-4dec-a2fa-7f88444905fe", 00:07:28.056 "strip_size_kb": 64, 00:07:28.056 "state": "configuring", 00:07:28.056 "raid_level": "concat", 00:07:28.056 "superblock": true, 00:07:28.056 "num_base_bdevs": 2, 00:07:28.056 "num_base_bdevs_discovered": 0, 00:07:28.056 "num_base_bdevs_operational": 2, 00:07:28.056 "base_bdevs_list": [ 00:07:28.056 { 00:07:28.056 "name": "BaseBdev1", 00:07:28.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.056 "is_configured": false, 00:07:28.056 "data_offset": 0, 00:07:28.056 "data_size": 0 00:07:28.056 }, 00:07:28.056 { 00:07:28.056 "name": "BaseBdev2", 00:07:28.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.056 "is_configured": false, 00:07:28.056 "data_offset": 0, 00:07:28.056 "data_size": 0 00:07:28.056 } 00:07:28.056 ] 00:07:28.056 }' 00:07:28.056 23:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.056 23:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.316 [2024-09-30 23:25:08.096699] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.316 [2024-09-30 23:25:08.096745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.316 [2024-09-30 23:25:08.108719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.316 [2024-09-30 23:25:08.108761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.316 [2024-09-30 23:25:08.108768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.316 [2024-09-30 23:25:08.108794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.316 [2024-09-30 23:25:08.129552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:07:28.316 BaseBdev1 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.316 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.316 [ 00:07:28.316 { 00:07:28.316 "name": "BaseBdev1", 00:07:28.316 "aliases": [ 00:07:28.316 "7bee4a5c-84b7-4719-8928-ca8c9d0cb0d5" 00:07:28.316 ], 00:07:28.316 "product_name": "Malloc disk", 00:07:28.316 "block_size": 512, 00:07:28.316 "num_blocks": 65536, 00:07:28.316 "uuid": "7bee4a5c-84b7-4719-8928-ca8c9d0cb0d5", 00:07:28.316 
"assigned_rate_limits": { 00:07:28.316 "rw_ios_per_sec": 0, 00:07:28.316 "rw_mbytes_per_sec": 0, 00:07:28.316 "r_mbytes_per_sec": 0, 00:07:28.316 "w_mbytes_per_sec": 0 00:07:28.316 }, 00:07:28.316 "claimed": true, 00:07:28.316 "claim_type": "exclusive_write", 00:07:28.316 "zoned": false, 00:07:28.316 "supported_io_types": { 00:07:28.316 "read": true, 00:07:28.316 "write": true, 00:07:28.316 "unmap": true, 00:07:28.316 "flush": true, 00:07:28.316 "reset": true, 00:07:28.316 "nvme_admin": false, 00:07:28.316 "nvme_io": false, 00:07:28.316 "nvme_io_md": false, 00:07:28.316 "write_zeroes": true, 00:07:28.316 "zcopy": true, 00:07:28.316 "get_zone_info": false, 00:07:28.316 "zone_management": false, 00:07:28.316 "zone_append": false, 00:07:28.316 "compare": false, 00:07:28.316 "compare_and_write": false, 00:07:28.316 "abort": true, 00:07:28.316 "seek_hole": false, 00:07:28.316 "seek_data": false, 00:07:28.316 "copy": true, 00:07:28.316 "nvme_iov_md": false 00:07:28.316 }, 00:07:28.316 "memory_domains": [ 00:07:28.316 { 00:07:28.316 "dma_device_id": "system", 00:07:28.316 "dma_device_type": 1 00:07:28.316 }, 00:07:28.316 { 00:07:28.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.316 "dma_device_type": 2 00:07:28.316 } 00:07:28.316 ], 00:07:28.316 "driver_specific": {} 00:07:28.316 } 00:07:28.316 ] 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.576 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.576 "name": "Existed_Raid", 00:07:28.576 "uuid": "4d3ed7c1-1e2f-4e70-a1c6-465460b35976", 00:07:28.576 "strip_size_kb": 64, 00:07:28.576 "state": "configuring", 00:07:28.576 "raid_level": "concat", 00:07:28.576 "superblock": true, 00:07:28.576 "num_base_bdevs": 2, 00:07:28.576 "num_base_bdevs_discovered": 1, 00:07:28.576 "num_base_bdevs_operational": 2, 00:07:28.576 "base_bdevs_list": [ 00:07:28.576 { 00:07:28.576 "name": "BaseBdev1", 00:07:28.576 "uuid": "7bee4a5c-84b7-4719-8928-ca8c9d0cb0d5", 00:07:28.576 "is_configured": true, 00:07:28.576 "data_offset": 
2048, 00:07:28.576 "data_size": 63488 00:07:28.576 }, 00:07:28.576 { 00:07:28.576 "name": "BaseBdev2", 00:07:28.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.576 "is_configured": false, 00:07:28.577 "data_offset": 0, 00:07:28.577 "data_size": 0 00:07:28.577 } 00:07:28.577 ] 00:07:28.577 }' 00:07:28.577 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.577 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.835 [2024-09-30 23:25:08.600798] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.835 [2024-09-30 23:25:08.600854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.835 [2024-09-30 23:25:08.612817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.835 [2024-09-30 23:25:08.614649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.835 [2024-09-30 23:25:08.614694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.835 "name": "Existed_Raid", 00:07:28.835 "uuid": "2bfc96a0-15bf-4af3-9738-cc172990b165", 00:07:28.835 "strip_size_kb": 64, 00:07:28.835 "state": "configuring", 00:07:28.835 "raid_level": "concat", 00:07:28.835 "superblock": true, 00:07:28.835 "num_base_bdevs": 2, 00:07:28.835 "num_base_bdevs_discovered": 1, 00:07:28.835 "num_base_bdevs_operational": 2, 00:07:28.835 "base_bdevs_list": [ 00:07:28.835 { 00:07:28.835 "name": "BaseBdev1", 00:07:28.835 "uuid": "7bee4a5c-84b7-4719-8928-ca8c9d0cb0d5", 00:07:28.835 "is_configured": true, 00:07:28.835 "data_offset": 2048, 00:07:28.835 "data_size": 63488 00:07:28.835 }, 00:07:28.835 { 00:07:28.835 "name": "BaseBdev2", 00:07:28.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.835 "is_configured": false, 00:07:28.835 "data_offset": 0, 00:07:28.835 "data_size": 0 00:07:28.835 } 00:07:28.835 ] 00:07:28.835 }' 00:07:28.835 23:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.836 23:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.407 [2024-09-30 23:25:09.036708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.407 [2024-09-30 23:25:09.036939] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:29.407 [2024-09-30 23:25:09.036956] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.407 BaseBdev2 00:07:29.407 [2024-09-30 23:25:09.037293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:29.407 [2024-09-30 23:25:09.037449] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:29.407 [2024-09-30 23:25:09.037467] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:29.407 [2024-09-30 23:25:09.037589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.407 [ 00:07:29.407 { 00:07:29.407 "name": "BaseBdev2", 00:07:29.407 "aliases": [ 00:07:29.407 "aa28bae5-e3d3-4e0f-8357-d02a8c59d75d" 00:07:29.407 ], 00:07:29.407 "product_name": "Malloc disk", 00:07:29.407 "block_size": 512, 00:07:29.407 "num_blocks": 65536, 00:07:29.407 "uuid": "aa28bae5-e3d3-4e0f-8357-d02a8c59d75d", 00:07:29.407 "assigned_rate_limits": { 00:07:29.407 "rw_ios_per_sec": 0, 00:07:29.407 "rw_mbytes_per_sec": 0, 00:07:29.407 "r_mbytes_per_sec": 0, 00:07:29.407 "w_mbytes_per_sec": 0 00:07:29.407 }, 00:07:29.407 "claimed": true, 00:07:29.407 "claim_type": "exclusive_write", 00:07:29.407 "zoned": false, 00:07:29.407 "supported_io_types": { 00:07:29.407 "read": true, 00:07:29.407 "write": true, 00:07:29.407 "unmap": true, 00:07:29.407 "flush": true, 00:07:29.407 "reset": true, 00:07:29.407 "nvme_admin": false, 00:07:29.407 "nvme_io": false, 00:07:29.407 "nvme_io_md": false, 00:07:29.407 "write_zeroes": true, 00:07:29.407 "zcopy": true, 00:07:29.407 "get_zone_info": false, 00:07:29.407 "zone_management": false, 00:07:29.407 "zone_append": false, 00:07:29.407 "compare": false, 00:07:29.407 "compare_and_write": false, 00:07:29.407 "abort": true, 00:07:29.407 "seek_hole": false, 00:07:29.407 "seek_data": false, 00:07:29.407 "copy": true, 00:07:29.407 "nvme_iov_md": false 00:07:29.407 }, 00:07:29.407 "memory_domains": [ 00:07:29.407 { 00:07:29.407 "dma_device_id": "system", 00:07:29.407 "dma_device_type": 1 00:07:29.407 }, 00:07:29.407 { 00:07:29.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.407 "dma_device_type": 2 00:07:29.407 } 00:07:29.407 ], 00:07:29.407 "driver_specific": {} 00:07:29.407 } 00:07:29.407 ] 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.407 "name": "Existed_Raid", 00:07:29.407 "uuid": "2bfc96a0-15bf-4af3-9738-cc172990b165", 00:07:29.407 "strip_size_kb": 64, 00:07:29.407 "state": "online", 00:07:29.407 "raid_level": "concat", 00:07:29.407 "superblock": true, 00:07:29.407 "num_base_bdevs": 2, 00:07:29.407 "num_base_bdevs_discovered": 2, 00:07:29.407 "num_base_bdevs_operational": 2, 00:07:29.407 "base_bdevs_list": [ 00:07:29.407 { 00:07:29.407 "name": "BaseBdev1", 00:07:29.407 "uuid": "7bee4a5c-84b7-4719-8928-ca8c9d0cb0d5", 00:07:29.407 "is_configured": true, 00:07:29.407 "data_offset": 2048, 00:07:29.407 "data_size": 63488 00:07:29.407 }, 00:07:29.407 { 00:07:29.407 "name": "BaseBdev2", 00:07:29.407 "uuid": "aa28bae5-e3d3-4e0f-8357-d02a8c59d75d", 00:07:29.407 "is_configured": true, 00:07:29.407 "data_offset": 2048, 00:07:29.407 "data_size": 63488 00:07:29.407 } 00:07:29.407 ] 00:07:29.407 }' 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.407 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.666 [2024-09-30 23:25:09.496252] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.666 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.666 "name": "Existed_Raid", 00:07:29.666 "aliases": [ 00:07:29.666 "2bfc96a0-15bf-4af3-9738-cc172990b165" 00:07:29.666 ], 00:07:29.666 "product_name": "Raid Volume", 00:07:29.666 "block_size": 512, 00:07:29.666 "num_blocks": 126976, 00:07:29.666 "uuid": "2bfc96a0-15bf-4af3-9738-cc172990b165", 00:07:29.666 "assigned_rate_limits": { 00:07:29.666 "rw_ios_per_sec": 0, 00:07:29.666 "rw_mbytes_per_sec": 0, 00:07:29.666 "r_mbytes_per_sec": 0, 00:07:29.666 "w_mbytes_per_sec": 0 00:07:29.666 }, 00:07:29.666 "claimed": false, 00:07:29.666 "zoned": false, 00:07:29.666 "supported_io_types": { 00:07:29.666 "read": true, 00:07:29.666 "write": true, 00:07:29.666 "unmap": true, 00:07:29.666 "flush": true, 00:07:29.666 "reset": true, 00:07:29.666 "nvme_admin": false, 00:07:29.666 "nvme_io": false, 00:07:29.666 "nvme_io_md": false, 00:07:29.666 "write_zeroes": true, 00:07:29.666 "zcopy": false, 00:07:29.666 "get_zone_info": false, 00:07:29.666 "zone_management": false, 00:07:29.666 "zone_append": false, 00:07:29.666 "compare": false, 00:07:29.666 "compare_and_write": false, 00:07:29.666 "abort": false, 00:07:29.666 "seek_hole": false, 
00:07:29.666 "seek_data": false, 00:07:29.666 "copy": false, 00:07:29.666 "nvme_iov_md": false 00:07:29.666 }, 00:07:29.666 "memory_domains": [ 00:07:29.666 { 00:07:29.666 "dma_device_id": "system", 00:07:29.666 "dma_device_type": 1 00:07:29.666 }, 00:07:29.666 { 00:07:29.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.666 "dma_device_type": 2 00:07:29.666 }, 00:07:29.666 { 00:07:29.666 "dma_device_id": "system", 00:07:29.666 "dma_device_type": 1 00:07:29.666 }, 00:07:29.666 { 00:07:29.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.666 "dma_device_type": 2 00:07:29.666 } 00:07:29.666 ], 00:07:29.666 "driver_specific": { 00:07:29.666 "raid": { 00:07:29.666 "uuid": "2bfc96a0-15bf-4af3-9738-cc172990b165", 00:07:29.666 "strip_size_kb": 64, 00:07:29.666 "state": "online", 00:07:29.666 "raid_level": "concat", 00:07:29.666 "superblock": true, 00:07:29.666 "num_base_bdevs": 2, 00:07:29.666 "num_base_bdevs_discovered": 2, 00:07:29.666 "num_base_bdevs_operational": 2, 00:07:29.666 "base_bdevs_list": [ 00:07:29.666 { 00:07:29.666 "name": "BaseBdev1", 00:07:29.666 "uuid": "7bee4a5c-84b7-4719-8928-ca8c9d0cb0d5", 00:07:29.666 "is_configured": true, 00:07:29.666 "data_offset": 2048, 00:07:29.666 "data_size": 63488 00:07:29.666 }, 00:07:29.666 { 00:07:29.666 "name": "BaseBdev2", 00:07:29.666 "uuid": "aa28bae5-e3d3-4e0f-8357-d02a8c59d75d", 00:07:29.666 "is_configured": true, 00:07:29.666 "data_offset": 2048, 00:07:29.666 "data_size": 63488 00:07:29.666 } 00:07:29.666 ] 00:07:29.666 } 00:07:29.666 } 00:07:29.666 }' 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:29.925 BaseBdev2' 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.925 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.926 23:25:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.926 [2024-09-30 23:25:09.689525] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:29.926 [2024-09-30 23:25:09.689566] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.926 [2024-09-30 23:25:09.689628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.926 23:25:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.926 "name": "Existed_Raid", 00:07:29.926 "uuid": "2bfc96a0-15bf-4af3-9738-cc172990b165", 00:07:29.926 "strip_size_kb": 64, 00:07:29.926 "state": "offline", 00:07:29.926 "raid_level": "concat", 00:07:29.926 "superblock": true, 00:07:29.926 "num_base_bdevs": 2, 00:07:29.926 "num_base_bdevs_discovered": 1, 00:07:29.926 "num_base_bdevs_operational": 1, 00:07:29.926 "base_bdevs_list": [ 00:07:29.926 { 00:07:29.926 "name": null, 00:07:29.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.926 "is_configured": false, 00:07:29.926 "data_offset": 0, 00:07:29.926 "data_size": 63488 00:07:29.926 }, 00:07:29.926 { 00:07:29.926 "name": 
"BaseBdev2", 00:07:29.926 "uuid": "aa28bae5-e3d3-4e0f-8357-d02a8c59d75d", 00:07:29.926 "is_configured": true, 00:07:29.926 "data_offset": 2048, 00:07:29.926 "data_size": 63488 00:07:29.926 } 00:07:29.926 ] 00:07:29.926 }' 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.926 23:25:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.493 [2024-09-30 23:25:10.219779] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.493 [2024-09-30 23:25:10.219846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73339 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73339 ']' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73339 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73339 00:07:30.493 killing process with 
pid 73339 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73339' 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73339 00:07:30.493 [2024-09-30 23:25:10.315748] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.493 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73339 00:07:30.493 [2024-09-30 23:25:10.316735] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.752 23:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:30.752 00:07:30.752 real 0m3.868s 00:07:30.752 user 0m6.004s 00:07:30.752 sys 0m0.833s 00:07:30.752 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.752 23:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.752 ************************************ 00:07:30.752 END TEST raid_state_function_test_sb 00:07:30.752 ************************************ 00:07:31.018 23:25:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:31.018 23:25:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:31.018 23:25:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.018 23:25:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.018 ************************************ 00:07:31.018 START TEST raid_superblock_test 00:07:31.018 ************************************ 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # 
raid_superblock_test concat 2 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73580 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:31.018 23:25:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73580 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73580 ']' 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.018 23:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.018 [2024-09-30 23:25:10.724548] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:31.018 [2024-09-30 23:25:10.724685] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73580 ] 00:07:31.310 [2024-09-30 23:25:10.889841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.310 [2024-09-30 23:25:10.934329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.310 [2024-09-30 23:25:10.976382] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.310 [2024-09-30 23:25:10.976425] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:31.891 
23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.891 malloc1 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.891 [2024-09-30 23:25:11.562834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.891 [2024-09-30 23:25:11.562934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.891 [2024-09-30 23:25:11.562956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:31.891 [2024-09-30 23:25:11.562976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.891 [2024-09-30 23:25:11.565217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.891 [2024-09-30 23:25:11.565254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.891 pt1 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.891 malloc2 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.891 [2024-09-30 23:25:11.608554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.891 [2024-09-30 23:25:11.608667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.891 [2024-09-30 23:25:11.608705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:31.891 [2024-09-30 23:25:11.608730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.891 [2024-09-30 23:25:11.613390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.891 [2024-09-30 23:25:11.613447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.891 
pt2 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.891 [2024-09-30 23:25:11.621711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.891 [2024-09-30 23:25:11.624172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.891 [2024-09-30 23:25:11.624327] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:31.891 [2024-09-30 23:25:11.624344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.891 [2024-09-30 23:25:11.624623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:31.891 [2024-09-30 23:25:11.624763] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:31.891 [2024-09-30 23:25:11.624782] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:31.891 [2024-09-30 23:25:11.624944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.891 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.891 "name": "raid_bdev1", 00:07:31.891 "uuid": "586110d0-4f67-467e-a05b-befbdc0e5c1e", 00:07:31.891 "strip_size_kb": 64, 00:07:31.891 "state": "online", 00:07:31.891 "raid_level": "concat", 00:07:31.891 "superblock": true, 00:07:31.891 "num_base_bdevs": 2, 00:07:31.891 "num_base_bdevs_discovered": 2, 00:07:31.891 "num_base_bdevs_operational": 2, 00:07:31.891 "base_bdevs_list": [ 00:07:31.891 { 00:07:31.891 "name": "pt1", 
00:07:31.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.891 "is_configured": true, 00:07:31.891 "data_offset": 2048, 00:07:31.891 "data_size": 63488 00:07:31.891 }, 00:07:31.891 { 00:07:31.891 "name": "pt2", 00:07:31.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.891 "is_configured": true, 00:07:31.891 "data_offset": 2048, 00:07:31.891 "data_size": 63488 00:07:31.892 } 00:07:31.892 ] 00:07:31.892 }' 00:07:31.892 23:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.892 23:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.460 [2024-09-30 23:25:12.069109] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.460 "name": "raid_bdev1", 00:07:32.460 "aliases": [ 00:07:32.460 "586110d0-4f67-467e-a05b-befbdc0e5c1e" 00:07:32.460 ], 00:07:32.460 "product_name": "Raid Volume", 00:07:32.460 "block_size": 512, 00:07:32.460 "num_blocks": 126976, 00:07:32.460 "uuid": "586110d0-4f67-467e-a05b-befbdc0e5c1e", 00:07:32.460 "assigned_rate_limits": { 00:07:32.460 "rw_ios_per_sec": 0, 00:07:32.460 "rw_mbytes_per_sec": 0, 00:07:32.460 "r_mbytes_per_sec": 0, 00:07:32.460 "w_mbytes_per_sec": 0 00:07:32.460 }, 00:07:32.460 "claimed": false, 00:07:32.460 "zoned": false, 00:07:32.460 "supported_io_types": { 00:07:32.460 "read": true, 00:07:32.460 "write": true, 00:07:32.460 "unmap": true, 00:07:32.460 "flush": true, 00:07:32.460 "reset": true, 00:07:32.460 "nvme_admin": false, 00:07:32.460 "nvme_io": false, 00:07:32.460 "nvme_io_md": false, 00:07:32.460 "write_zeroes": true, 00:07:32.460 "zcopy": false, 00:07:32.460 "get_zone_info": false, 00:07:32.460 "zone_management": false, 00:07:32.460 "zone_append": false, 00:07:32.460 "compare": false, 00:07:32.460 "compare_and_write": false, 00:07:32.460 "abort": false, 00:07:32.460 "seek_hole": false, 00:07:32.460 "seek_data": false, 00:07:32.460 "copy": false, 00:07:32.460 "nvme_iov_md": false 00:07:32.460 }, 00:07:32.460 "memory_domains": [ 00:07:32.460 { 00:07:32.460 "dma_device_id": "system", 00:07:32.460 "dma_device_type": 1 00:07:32.460 }, 00:07:32.460 { 00:07:32.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.460 "dma_device_type": 2 00:07:32.460 }, 00:07:32.460 { 00:07:32.460 "dma_device_id": "system", 00:07:32.460 "dma_device_type": 1 00:07:32.460 }, 00:07:32.460 { 00:07:32.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.460 "dma_device_type": 2 00:07:32.460 } 00:07:32.460 ], 00:07:32.460 "driver_specific": { 00:07:32.460 "raid": { 00:07:32.460 "uuid": "586110d0-4f67-467e-a05b-befbdc0e5c1e", 00:07:32.460 "strip_size_kb": 64, 00:07:32.460 "state": "online", 00:07:32.460 
"raid_level": "concat", 00:07:32.460 "superblock": true, 00:07:32.460 "num_base_bdevs": 2, 00:07:32.460 "num_base_bdevs_discovered": 2, 00:07:32.460 "num_base_bdevs_operational": 2, 00:07:32.460 "base_bdevs_list": [ 00:07:32.460 { 00:07:32.460 "name": "pt1", 00:07:32.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.460 "is_configured": true, 00:07:32.460 "data_offset": 2048, 00:07:32.460 "data_size": 63488 00:07:32.460 }, 00:07:32.460 { 00:07:32.460 "name": "pt2", 00:07:32.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.460 "is_configured": true, 00:07:32.460 "data_offset": 2048, 00:07:32.460 "data_size": 63488 00:07:32.460 } 00:07:32.460 ] 00:07:32.460 } 00:07:32.460 } 00:07:32.460 }' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.460 pt2' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.460 23:25:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:32.460 [2024-09-30 23:25:12.292645] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.460 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=586110d0-4f67-467e-a05b-befbdc0e5c1e 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
586110d0-4f67-467e-a05b-befbdc0e5c1e ']' 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 [2024-09-30 23:25:12.340328] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.721 [2024-09-30 23:25:12.340365] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.721 [2024-09-30 23:25:12.340433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.721 [2024-09-30 23:25:12.340487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.721 [2024-09-30 23:25:12.340506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.721 23:25:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 [2024-09-30 23:25:12.476127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:32.721 [2024-09-30 23:25:12.477966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:32.721 [2024-09-30 23:25:12.478044] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:32.721 [2024-09-30 23:25:12.478093] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:32.721 [2024-09-30 23:25:12.478111] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.721 [2024-09-30 23:25:12.478120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:32.721 request: 00:07:32.721 { 00:07:32.721 "name": "raid_bdev1", 00:07:32.721 "raid_level": "concat", 00:07:32.721 "base_bdevs": [ 00:07:32.721 "malloc1", 00:07:32.721 "malloc2" 00:07:32.721 ], 00:07:32.721 "strip_size_kb": 64, 
00:07:32.721 "superblock": false, 00:07:32.721 "method": "bdev_raid_create", 00:07:32.721 "req_id": 1 00:07:32.721 } 00:07:32.721 Got JSON-RPC error response 00:07:32.721 response: 00:07:32.721 { 00:07:32.721 "code": -17, 00:07:32.721 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:32.721 } 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 [2024-09-30 23:25:12.539977] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:07:32.721 [2024-09-30 23:25:12.540025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.721 [2024-09-30 23:25:12.540041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:32.721 [2024-09-30 23:25:12.540049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.721 [2024-09-30 23:25:12.542066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.721 [2024-09-30 23:25:12.542097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.721 [2024-09-30 23:25:12.542174] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:32.721 [2024-09-30 23:25:12.542214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.721 pt1 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.721 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:32.722 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.722 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.722 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.722 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.722 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.722 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.981 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.981 "name": "raid_bdev1", 00:07:32.981 "uuid": "586110d0-4f67-467e-a05b-befbdc0e5c1e", 00:07:32.981 "strip_size_kb": 64, 00:07:32.981 "state": "configuring", 00:07:32.981 "raid_level": "concat", 00:07:32.981 "superblock": true, 00:07:32.981 "num_base_bdevs": 2, 00:07:32.981 "num_base_bdevs_discovered": 1, 00:07:32.981 "num_base_bdevs_operational": 2, 00:07:32.981 "base_bdevs_list": [ 00:07:32.981 { 00:07:32.981 "name": "pt1", 00:07:32.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.981 "is_configured": true, 00:07:32.981 "data_offset": 2048, 00:07:32.981 "data_size": 63488 00:07:32.981 }, 00:07:32.981 { 00:07:32.981 "name": null, 00:07:32.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.981 "is_configured": false, 00:07:32.981 "data_offset": 2048, 00:07:32.981 "data_size": 63488 00:07:32.981 } 00:07:32.981 ] 00:07:32.981 }' 00:07:32.981 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.981 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.241 [2024-09-30 23:25:12.991287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.241 [2024-09-30 23:25:12.991362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.241 [2024-09-30 23:25:12.991386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:33.241 [2024-09-30 23:25:12.991395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.241 [2024-09-30 23:25:12.991808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.241 [2024-09-30 23:25:12.991835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:33.241 [2024-09-30 23:25:12.991925] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:33.241 [2024-09-30 23:25:12.991949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.241 [2024-09-30 23:25:12.992050] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:33.241 [2024-09-30 23:25:12.992068] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.241 [2024-09-30 23:25:12.992292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:33.241 [2024-09-30 23:25:12.992408] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 
00:07:33.241 [2024-09-30 23:25:12.992430] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:33.241 [2024-09-30 23:25:12.992530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.241 pt2 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.241 23:25:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.241 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.241 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.241 23:25:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.241 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.241 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.241 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.241 "name": "raid_bdev1", 00:07:33.241 "uuid": "586110d0-4f67-467e-a05b-befbdc0e5c1e", 00:07:33.241 "strip_size_kb": 64, 00:07:33.241 "state": "online", 00:07:33.241 "raid_level": "concat", 00:07:33.241 "superblock": true, 00:07:33.241 "num_base_bdevs": 2, 00:07:33.241 "num_base_bdevs_discovered": 2, 00:07:33.241 "num_base_bdevs_operational": 2, 00:07:33.241 "base_bdevs_list": [ 00:07:33.241 { 00:07:33.241 "name": "pt1", 00:07:33.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.241 "is_configured": true, 00:07:33.241 "data_offset": 2048, 00:07:33.241 "data_size": 63488 00:07:33.241 }, 00:07:33.241 { 00:07:33.241 "name": "pt2", 00:07:33.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.241 "is_configured": true, 00:07:33.241 "data_offset": 2048, 00:07:33.241 "data_size": 63488 00:07:33.241 } 00:07:33.241 ] 00:07:33.241 }' 00:07:33.241 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.241 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.810 23:25:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.810 [2024-09-30 23:25:13.418910] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.810 "name": "raid_bdev1", 00:07:33.810 "aliases": [ 00:07:33.810 "586110d0-4f67-467e-a05b-befbdc0e5c1e" 00:07:33.810 ], 00:07:33.810 "product_name": "Raid Volume", 00:07:33.810 "block_size": 512, 00:07:33.810 "num_blocks": 126976, 00:07:33.810 "uuid": "586110d0-4f67-467e-a05b-befbdc0e5c1e", 00:07:33.810 "assigned_rate_limits": { 00:07:33.810 "rw_ios_per_sec": 0, 00:07:33.810 "rw_mbytes_per_sec": 0, 00:07:33.810 "r_mbytes_per_sec": 0, 00:07:33.810 "w_mbytes_per_sec": 0 00:07:33.810 }, 00:07:33.810 "claimed": false, 00:07:33.810 "zoned": false, 00:07:33.810 "supported_io_types": { 00:07:33.810 "read": true, 00:07:33.810 "write": true, 00:07:33.810 "unmap": true, 00:07:33.810 "flush": true, 00:07:33.810 "reset": true, 00:07:33.810 "nvme_admin": false, 00:07:33.810 "nvme_io": false, 00:07:33.810 "nvme_io_md": false, 00:07:33.810 "write_zeroes": true, 00:07:33.810 "zcopy": false, 00:07:33.810 "get_zone_info": false, 00:07:33.810 "zone_management": false, 00:07:33.810 "zone_append": false, 00:07:33.810 "compare": false, 00:07:33.810 "compare_and_write": false, 00:07:33.810 "abort": false, 00:07:33.810 "seek_hole": false, 00:07:33.810 
"seek_data": false, 00:07:33.810 "copy": false, 00:07:33.810 "nvme_iov_md": false 00:07:33.810 }, 00:07:33.810 "memory_domains": [ 00:07:33.810 { 00:07:33.810 "dma_device_id": "system", 00:07:33.810 "dma_device_type": 1 00:07:33.810 }, 00:07:33.810 { 00:07:33.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.810 "dma_device_type": 2 00:07:33.810 }, 00:07:33.810 { 00:07:33.810 "dma_device_id": "system", 00:07:33.810 "dma_device_type": 1 00:07:33.810 }, 00:07:33.810 { 00:07:33.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.810 "dma_device_type": 2 00:07:33.810 } 00:07:33.810 ], 00:07:33.810 "driver_specific": { 00:07:33.810 "raid": { 00:07:33.810 "uuid": "586110d0-4f67-467e-a05b-befbdc0e5c1e", 00:07:33.810 "strip_size_kb": 64, 00:07:33.810 "state": "online", 00:07:33.810 "raid_level": "concat", 00:07:33.810 "superblock": true, 00:07:33.810 "num_base_bdevs": 2, 00:07:33.810 "num_base_bdevs_discovered": 2, 00:07:33.810 "num_base_bdevs_operational": 2, 00:07:33.810 "base_bdevs_list": [ 00:07:33.810 { 00:07:33.810 "name": "pt1", 00:07:33.810 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.810 "is_configured": true, 00:07:33.810 "data_offset": 2048, 00:07:33.810 "data_size": 63488 00:07:33.810 }, 00:07:33.810 { 00:07:33.810 "name": "pt2", 00:07:33.810 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.810 "is_configured": true, 00:07:33.810 "data_offset": 2048, 00:07:33.810 "data_size": 63488 00:07:33.810 } 00:07:33.810 ] 00:07:33.810 } 00:07:33.810 } 00:07:33.810 }' 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:33.810 pt2' 00:07:33.810 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.811 23:25:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:33.811 [2024-09-30 23:25:13.622561] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 586110d0-4f67-467e-a05b-befbdc0e5c1e '!=' 586110d0-4f67-467e-a05b-befbdc0e5c1e ']' 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73580 00:07:33.811 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73580 ']' 00:07:34.071 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73580 00:07:34.071 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:34.071 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.071 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73580 00:07:34.071 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.072 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.072 killing process with pid 73580 00:07:34.072 23:25:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 73580' 00:07:34.072 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73580 00:07:34.072 [2024-09-30 23:25:13.707968] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.072 [2024-09-30 23:25:13.708068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.072 [2024-09-30 23:25:13.708120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.072 [2024-09-30 23:25:13.708129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:34.072 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73580 00:07:34.072 [2024-09-30 23:25:13.730634] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.333 23:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.333 00:07:34.333 real 0m3.345s 00:07:34.333 user 0m5.112s 00:07:34.333 sys 0m0.736s 00:07:34.333 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.333 23:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.333 ************************************ 00:07:34.333 END TEST raid_superblock_test 00:07:34.333 ************************************ 00:07:34.333 23:25:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:34.333 23:25:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:34.333 23:25:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.333 23:25:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.333 ************************************ 00:07:34.333 START TEST raid_read_error_test 00:07:34.333 ************************************ 00:07:34.333 23:25:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.333 23:25:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iYs3T664XD 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73775 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73775 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73775 ']' 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.333 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.333 [2024-09-30 23:25:14.157796] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:34.333 [2024-09-30 23:25:14.157955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73775 ] 00:07:34.592 [2024-09-30 23:25:14.322587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.592 [2024-09-30 23:25:14.370461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.592 [2024-09-30 23:25:14.413395] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.592 [2024-09-30 23:25:14.413441] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.161 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.161 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:35.161 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.161 23:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.161 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.161 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.161 BaseBdev1_malloc 00:07:35.161 23:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.161 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.161 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.161 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.161 true 00:07:35.161 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:35.161 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.421 [2024-09-30 23:25:15.019675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.421 [2024-09-30 23:25:15.019731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.421 [2024-09-30 23:25:15.019750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.421 [2024-09-30 23:25:15.019759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.421 [2024-09-30 23:25:15.021862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.421 [2024-09-30 23:25:15.021911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.421 BaseBdev1 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.421 BaseBdev2_malloc 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.421 true 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.421 [2024-09-30 23:25:15.071193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.421 [2024-09-30 23:25:15.071247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.421 [2024-09-30 23:25:15.071265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.421 [2024-09-30 23:25:15.071273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.421 [2024-09-30 23:25:15.073250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.421 [2024-09-30 23:25:15.073287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.421 BaseBdev2 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.421 [2024-09-30 23:25:15.083221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:35.421 [2024-09-30 23:25:15.085007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.421 [2024-09-30 23:25:15.085174] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:35.421 [2024-09-30 23:25:15.085187] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.421 [2024-09-30 23:25:15.085426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:35.421 [2024-09-30 23:25:15.085566] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:35.421 [2024-09-30 23:25:15.085579] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:35.421 [2024-09-30 23:25:15.085708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.421 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.421 "name": "raid_bdev1", 00:07:35.421 "uuid": "246684ff-a43e-4033-983c-9195ed68b40e", 00:07:35.421 "strip_size_kb": 64, 00:07:35.421 "state": "online", 00:07:35.421 "raid_level": "concat", 00:07:35.421 "superblock": true, 00:07:35.421 "num_base_bdevs": 2, 00:07:35.421 "num_base_bdevs_discovered": 2, 00:07:35.421 "num_base_bdevs_operational": 2, 00:07:35.421 "base_bdevs_list": [ 00:07:35.421 { 00:07:35.421 "name": "BaseBdev1", 00:07:35.421 "uuid": "88dce2b9-a193-5c43-b46c-5b34f8d5b5f3", 00:07:35.421 "is_configured": true, 00:07:35.421 "data_offset": 2048, 00:07:35.421 "data_size": 63488 00:07:35.421 }, 00:07:35.421 { 00:07:35.421 "name": "BaseBdev2", 00:07:35.421 "uuid": "aa4d026a-e569-52c3-987e-6414e6051483", 00:07:35.421 "is_configured": true, 00:07:35.422 "data_offset": 2048, 00:07:35.422 "data_size": 63488 00:07:35.422 } 00:07:35.422 ] 00:07:35.422 }' 00:07:35.422 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.422 23:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.990 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:07:35.990 23:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:35.990 [2024-09-30 23:25:15.619021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.927 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.927 "name": "raid_bdev1", 00:07:36.927 "uuid": "246684ff-a43e-4033-983c-9195ed68b40e", 00:07:36.927 "strip_size_kb": 64, 00:07:36.927 "state": "online", 00:07:36.927 "raid_level": "concat", 00:07:36.927 "superblock": true, 00:07:36.927 "num_base_bdevs": 2, 00:07:36.927 "num_base_bdevs_discovered": 2, 00:07:36.927 "num_base_bdevs_operational": 2, 00:07:36.927 "base_bdevs_list": [ 00:07:36.927 { 00:07:36.927 "name": "BaseBdev1", 00:07:36.927 "uuid": "88dce2b9-a193-5c43-b46c-5b34f8d5b5f3", 00:07:36.927 "is_configured": true, 00:07:36.927 "data_offset": 2048, 00:07:36.927 "data_size": 63488 00:07:36.927 }, 00:07:36.927 { 00:07:36.927 "name": "BaseBdev2", 00:07:36.928 "uuid": "aa4d026a-e569-52c3-987e-6414e6051483", 00:07:36.928 "is_configured": true, 00:07:36.928 "data_offset": 2048, 00:07:36.928 "data_size": 63488 00:07:36.928 } 00:07:36.928 ] 00:07:36.928 }' 00:07:36.928 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.928 23:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.187 23:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.187 23:25:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.187 23:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.187 [2024-09-30 23:25:16.998537] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.187 [2024-09-30 23:25:16.998631] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.187 [2024-09-30 23:25:17.001090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.187 [2024-09-30 23:25:17.001175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.187 [2024-09-30 23:25:17.001232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.187 [2024-09-30 23:25:17.001279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:37.187 { 00:07:37.187 "results": [ 00:07:37.187 { 00:07:37.187 "job": "raid_bdev1", 00:07:37.187 "core_mask": "0x1", 00:07:37.187 "workload": "randrw", 00:07:37.187 "percentage": 50, 00:07:37.187 "status": "finished", 00:07:37.187 "queue_depth": 1, 00:07:37.187 "io_size": 131072, 00:07:37.187 "runtime": 1.380466, 00:07:37.187 "iops": 17250.696503934178, 00:07:37.187 "mibps": 2156.337062991772, 00:07:37.187 "io_failed": 1, 00:07:37.187 "io_timeout": 0, 00:07:37.187 "avg_latency_us": 80.28500389189962, 00:07:37.187 "min_latency_us": 25.6, 00:07:37.187 "max_latency_us": 1352.216593886463 00:07:37.187 } 00:07:37.187 ], 00:07:37.187 "core_count": 1 00:07:37.187 } 00:07:37.187 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.187 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73775 00:07:37.187 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73775 ']' 00:07:37.187 23:25:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73775 00:07:37.187 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:37.187 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.187 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73775 00:07:37.447 killing process with pid 73775 00:07:37.447 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.447 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.447 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73775' 00:07:37.447 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73775 00:07:37.447 [2024-09-30 23:25:17.052105] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.447 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73775 00:07:37.447 [2024-09-30 23:25:17.067222] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iYs3T664XD 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:37.707 ************************************ 00:07:37.707 END TEST raid_read_error_test 00:07:37.707 ************************************ 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:37.707 00:07:37.707 real 0m3.264s 00:07:37.707 user 0m4.097s 00:07:37.707 sys 0m0.544s 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.707 23:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.707 23:25:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:37.707 23:25:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:37.707 23:25:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.707 23:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:37.707 ************************************ 00:07:37.707 START TEST raid_write_error_test 00:07:37.707 ************************************ 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:37.707 23:25:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:37.707 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6PQTDHcRWV 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73908 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73908 00:07:37.708 23:25:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73908 ']' 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.708 23:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.708 [2024-09-30 23:25:17.486195] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:37.708 [2024-09-30 23:25:17.486327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73908 ] 00:07:37.968 [2024-09-30 23:25:17.645847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.968 [2024-09-30 23:25:17.698338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.968 [2024-09-30 23:25:17.740718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.968 [2024-09-30 23:25:17.740757] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.538 BaseBdev1_malloc 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.538 true 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.538 [2024-09-30 23:25:18.335886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:38.538 [2024-09-30 23:25:18.336026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.538 [2024-09-30 23:25:18.336061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:38.538 [2024-09-30 23:25:18.336087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.538 [2024-09-30 23:25:18.338084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.538 [2024-09-30 23:25:18.338152] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:38.538 BaseBdev1 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.538 BaseBdev2_malloc 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.538 true 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.538 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.797 [2024-09-30 23:25:18.392882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:38.797 [2024-09-30 23:25:18.393043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.797 [2024-09-30 23:25:18.393104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:38.797 
[2024-09-30 23:25:18.393155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.797 [2024-09-30 23:25:18.396398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.797 BaseBdev2 00:07:38.797 [2024-09-30 23:25:18.396508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:38.797 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.797 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:38.797 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.797 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.797 [2024-09-30 23:25:18.404854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:38.797 [2024-09-30 23:25:18.407042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:38.797 [2024-09-30 23:25:18.407283] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:38.797 [2024-09-30 23:25:18.407338] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:38.797 [2024-09-30 23:25:18.407625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:38.797 [2024-09-30 23:25:18.407812] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:38.797 [2024-09-30 23:25:18.407866] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:38.797 [2024-09-30 23:25:18.408052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.797 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.797 
23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.798 "name": "raid_bdev1", 00:07:38.798 "uuid": "dad415ca-e08a-4fd3-b2df-ee26cb6dcffe", 00:07:38.798 "strip_size_kb": 64, 00:07:38.798 "state": "online", 00:07:38.798 "raid_level": "concat", 00:07:38.798 "superblock": true, 
00:07:38.798 "num_base_bdevs": 2, 00:07:38.798 "num_base_bdevs_discovered": 2, 00:07:38.798 "num_base_bdevs_operational": 2, 00:07:38.798 "base_bdevs_list": [ 00:07:38.798 { 00:07:38.798 "name": "BaseBdev1", 00:07:38.798 "uuid": "ab742cf5-8184-58d5-bd02-fba401fbf5f5", 00:07:38.798 "is_configured": true, 00:07:38.798 "data_offset": 2048, 00:07:38.798 "data_size": 63488 00:07:38.798 }, 00:07:38.798 { 00:07:38.798 "name": "BaseBdev2", 00:07:38.798 "uuid": "f328c788-c0d5-5083-b062-1c30e6492db7", 00:07:38.798 "is_configured": true, 00:07:38.798 "data_offset": 2048, 00:07:38.798 "data_size": 63488 00:07:38.798 } 00:07:38.798 ] 00:07:38.798 }' 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.798 23:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.056 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:39.056 23:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:39.315 [2024-09-30 23:25:18.960262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.251 23:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.252 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.252 23:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.252 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.252 "name": "raid_bdev1", 00:07:40.252 "uuid": "dad415ca-e08a-4fd3-b2df-ee26cb6dcffe", 00:07:40.252 "strip_size_kb": 64, 00:07:40.252 "state": "online", 00:07:40.252 "raid_level": "concat", 
00:07:40.252 "superblock": true, 00:07:40.252 "num_base_bdevs": 2, 00:07:40.252 "num_base_bdevs_discovered": 2, 00:07:40.252 "num_base_bdevs_operational": 2, 00:07:40.252 "base_bdevs_list": [ 00:07:40.252 { 00:07:40.252 "name": "BaseBdev1", 00:07:40.252 "uuid": "ab742cf5-8184-58d5-bd02-fba401fbf5f5", 00:07:40.252 "is_configured": true, 00:07:40.252 "data_offset": 2048, 00:07:40.252 "data_size": 63488 00:07:40.252 }, 00:07:40.252 { 00:07:40.252 "name": "BaseBdev2", 00:07:40.252 "uuid": "f328c788-c0d5-5083-b062-1c30e6492db7", 00:07:40.252 "is_configured": true, 00:07:40.252 "data_offset": 2048, 00:07:40.252 "data_size": 63488 00:07:40.252 } 00:07:40.252 ] 00:07:40.252 }' 00:07:40.252 23:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.252 23:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.511 [2024-09-30 23:25:20.263319] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.511 [2024-09-30 23:25:20.263436] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.511 [2024-09-30 23:25:20.265853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.511 [2024-09-30 23:25:20.265908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.511 [2024-09-30 23:25:20.265941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.511 [2024-09-30 23:25:20.265949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:40.511 { 
00:07:40.511 "results": [ 00:07:40.511 { 00:07:40.511 "job": "raid_bdev1", 00:07:40.511 "core_mask": "0x1", 00:07:40.511 "workload": "randrw", 00:07:40.511 "percentage": 50, 00:07:40.511 "status": "finished", 00:07:40.511 "queue_depth": 1, 00:07:40.511 "io_size": 131072, 00:07:40.511 "runtime": 1.303929, 00:07:40.511 "iops": 18161.2649154977, 00:07:40.511 "mibps": 2270.1581144372126, 00:07:40.511 "io_failed": 1, 00:07:40.511 "io_timeout": 0, 00:07:40.511 "avg_latency_us": 76.17667249719631, 00:07:40.511 "min_latency_us": 24.258515283842794, 00:07:40.511 "max_latency_us": 1323.598253275109 00:07:40.511 } 00:07:40.511 ], 00:07:40.511 "core_count": 1 00:07:40.511 } 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73908 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73908 ']' 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73908 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73908 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.511 killing process with pid 73908 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73908' 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73908 00:07:40.511 [2024-09-30 23:25:20.317855] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:40.511 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73908 00:07:40.511 [2024-09-30 23:25:20.333401] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6PQTDHcRWV 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:07:40.769 00:07:40.769 real 0m3.194s 00:07:40.769 user 0m4.008s 00:07:40.769 sys 0m0.511s 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.769 23:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.769 ************************************ 00:07:40.769 END TEST raid_write_error_test 00:07:40.769 ************************************ 00:07:41.028 23:25:20 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:41.028 23:25:20 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:41.028 23:25:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:41.028 23:25:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.028 23:25:20 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.028 ************************************ 00:07:41.028 START TEST raid_state_function_test 00:07:41.028 ************************************ 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74036 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74036' 00:07:41.028 Process raid pid: 74036 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74036 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74036 ']' 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.028 23:25:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.028 [2024-09-30 23:25:20.747368] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:41.028 [2024-09-30 23:25:20.747499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.287 [2024-09-30 23:25:20.912301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.287 [2024-09-30 23:25:20.955589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.287 [2024-09-30 23:25:20.997059] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.287 [2024-09-30 23:25:20.997094] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.855 [2024-09-30 23:25:21.574233] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.855 [2024-09-30 23:25:21.574373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.855 [2024-09-30 23:25:21.574422] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:07:41.855 [2024-09-30 23:25:21.574446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.855 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.856 "name": "Existed_Raid", 00:07:41.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.856 "strip_size_kb": 0, 00:07:41.856 "state": "configuring", 00:07:41.856 "raid_level": "raid1", 00:07:41.856 "superblock": false, 00:07:41.856 "num_base_bdevs": 2, 00:07:41.856 "num_base_bdevs_discovered": 0, 00:07:41.856 "num_base_bdevs_operational": 2, 00:07:41.856 "base_bdevs_list": [ 00:07:41.856 { 00:07:41.856 "name": "BaseBdev1", 00:07:41.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.856 "is_configured": false, 00:07:41.856 "data_offset": 0, 00:07:41.856 "data_size": 0 00:07:41.856 }, 00:07:41.856 { 00:07:41.856 "name": "BaseBdev2", 00:07:41.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.856 "is_configured": false, 00:07:41.856 "data_offset": 0, 00:07:41.856 "data_size": 0 00:07:41.856 } 00:07:41.856 ] 00:07:41.856 }' 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.856 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.425 23:25:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.425 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.425 23:25:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.425 [2024-09-30 23:25:22.005408] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.425 [2024-09-30 23:25:22.005501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:42.425 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.425 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 [2024-09-30 23:25:22.017406] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.426 [2024-09-30 23:25:22.017490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.426 [2024-09-30 23:25:22.017517] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.426 [2024-09-30 23:25:22.017537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 [2024-09-30 23:25:22.038139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.426 BaseBdev1 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:42.426 
23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 [ 00:07:42.426 { 00:07:42.426 "name": "BaseBdev1", 00:07:42.426 "aliases": [ 00:07:42.426 "871bfa24-f846-4b2b-a0ac-0bfd604dc6ba" 00:07:42.426 ], 00:07:42.426 "product_name": "Malloc disk", 00:07:42.426 "block_size": 512, 00:07:42.426 "num_blocks": 65536, 00:07:42.426 "uuid": "871bfa24-f846-4b2b-a0ac-0bfd604dc6ba", 00:07:42.426 "assigned_rate_limits": { 00:07:42.426 "rw_ios_per_sec": 0, 00:07:42.426 "rw_mbytes_per_sec": 0, 00:07:42.426 "r_mbytes_per_sec": 0, 00:07:42.426 "w_mbytes_per_sec": 0 00:07:42.426 }, 00:07:42.426 "claimed": true, 00:07:42.426 "claim_type": "exclusive_write", 00:07:42.426 "zoned": false, 00:07:42.426 "supported_io_types": { 00:07:42.426 "read": true, 00:07:42.426 "write": true, 00:07:42.426 "unmap": true, 00:07:42.426 "flush": true, 00:07:42.426 "reset": true, 00:07:42.426 "nvme_admin": false, 00:07:42.426 "nvme_io": false, 00:07:42.426 "nvme_io_md": false, 00:07:42.426 "write_zeroes": true, 00:07:42.426 "zcopy": true, 00:07:42.426 "get_zone_info": 
false, 00:07:42.426 "zone_management": false, 00:07:42.426 "zone_append": false, 00:07:42.426 "compare": false, 00:07:42.426 "compare_and_write": false, 00:07:42.426 "abort": true, 00:07:42.426 "seek_hole": false, 00:07:42.426 "seek_data": false, 00:07:42.426 "copy": true, 00:07:42.426 "nvme_iov_md": false 00:07:42.426 }, 00:07:42.426 "memory_domains": [ 00:07:42.426 { 00:07:42.426 "dma_device_id": "system", 00:07:42.426 "dma_device_type": 1 00:07:42.426 }, 00:07:42.426 { 00:07:42.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.426 "dma_device_type": 2 00:07:42.426 } 00:07:42.426 ], 00:07:42.426 "driver_specific": {} 00:07:42.426 } 00:07:42.426 ] 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.426 "name": "Existed_Raid", 00:07:42.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.426 "strip_size_kb": 0, 00:07:42.426 "state": "configuring", 00:07:42.426 "raid_level": "raid1", 00:07:42.426 "superblock": false, 00:07:42.426 "num_base_bdevs": 2, 00:07:42.426 "num_base_bdevs_discovered": 1, 00:07:42.426 "num_base_bdevs_operational": 2, 00:07:42.426 "base_bdevs_list": [ 00:07:42.426 { 00:07:42.426 "name": "BaseBdev1", 00:07:42.426 "uuid": "871bfa24-f846-4b2b-a0ac-0bfd604dc6ba", 00:07:42.426 "is_configured": true, 00:07:42.426 "data_offset": 0, 00:07:42.426 "data_size": 65536 00:07:42.426 }, 00:07:42.426 { 00:07:42.426 "name": "BaseBdev2", 00:07:42.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.426 "is_configured": false, 00:07:42.426 "data_offset": 0, 00:07:42.426 "data_size": 0 00:07:42.426 } 00:07:42.426 ] 00:07:42.426 }' 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.426 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 [2024-09-30 23:25:22.477441] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.685 [2024-09-30 23:25:22.477572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 [2024-09-30 23:25:22.489413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.685 [2024-09-30 23:25:22.491165] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.685 [2024-09-30 23:25:22.491243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.685 "name": "Existed_Raid", 00:07:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.685 "strip_size_kb": 0, 00:07:42.685 "state": "configuring", 00:07:42.685 "raid_level": "raid1", 00:07:42.685 "superblock": false, 00:07:42.685 "num_base_bdevs": 2, 00:07:42.685 "num_base_bdevs_discovered": 1, 00:07:42.685 "num_base_bdevs_operational": 2, 00:07:42.685 "base_bdevs_list": [ 00:07:42.685 { 00:07:42.685 "name": "BaseBdev1", 00:07:42.685 "uuid": "871bfa24-f846-4b2b-a0ac-0bfd604dc6ba", 00:07:42.685 
"is_configured": true, 00:07:42.685 "data_offset": 0, 00:07:42.685 "data_size": 65536 00:07:42.685 }, 00:07:42.685 { 00:07:42.685 "name": "BaseBdev2", 00:07:42.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.685 "is_configured": false, 00:07:42.685 "data_offset": 0, 00:07:42.685 "data_size": 0 00:07:42.685 } 00:07:42.685 ] 00:07:42.685 }' 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.685 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.254 [2024-09-30 23:25:22.932509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.254 [2024-09-30 23:25:22.932643] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:43.254 [2024-09-30 23:25:22.932690] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:43.254 [2024-09-30 23:25:22.933088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:43.254 [2024-09-30 23:25:22.933300] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:43.254 [2024-09-30 23:25:22.933362] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:43.254 [2024-09-30 23:25:22.933655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.254 BaseBdev2 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.254 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.254 [ 00:07:43.254 { 00:07:43.254 "name": "BaseBdev2", 00:07:43.254 "aliases": [ 00:07:43.254 "b49f6837-2cff-4392-8cc9-51c2e69c0d96" 00:07:43.254 ], 00:07:43.254 "product_name": "Malloc disk", 00:07:43.254 "block_size": 512, 00:07:43.254 "num_blocks": 65536, 00:07:43.254 "uuid": "b49f6837-2cff-4392-8cc9-51c2e69c0d96", 00:07:43.254 "assigned_rate_limits": { 00:07:43.254 "rw_ios_per_sec": 0, 00:07:43.254 "rw_mbytes_per_sec": 0, 00:07:43.254 "r_mbytes_per_sec": 0, 00:07:43.254 "w_mbytes_per_sec": 0 00:07:43.254 }, 00:07:43.254 "claimed": true, 00:07:43.254 "claim_type": 
"exclusive_write", 00:07:43.254 "zoned": false, 00:07:43.254 "supported_io_types": { 00:07:43.254 "read": true, 00:07:43.254 "write": true, 00:07:43.254 "unmap": true, 00:07:43.254 "flush": true, 00:07:43.254 "reset": true, 00:07:43.254 "nvme_admin": false, 00:07:43.254 "nvme_io": false, 00:07:43.254 "nvme_io_md": false, 00:07:43.254 "write_zeroes": true, 00:07:43.254 "zcopy": true, 00:07:43.254 "get_zone_info": false, 00:07:43.254 "zone_management": false, 00:07:43.255 "zone_append": false, 00:07:43.255 "compare": false, 00:07:43.255 "compare_and_write": false, 00:07:43.255 "abort": true, 00:07:43.255 "seek_hole": false, 00:07:43.255 "seek_data": false, 00:07:43.255 "copy": true, 00:07:43.255 "nvme_iov_md": false 00:07:43.255 }, 00:07:43.255 "memory_domains": [ 00:07:43.255 { 00:07:43.255 "dma_device_id": "system", 00:07:43.255 "dma_device_type": 1 00:07:43.255 }, 00:07:43.255 { 00:07:43.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.255 "dma_device_type": 2 00:07:43.255 } 00:07:43.255 ], 00:07:43.255 "driver_specific": {} 00:07:43.255 } 00:07:43.255 ] 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.255 
23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.255 23:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.255 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.255 "name": "Existed_Raid", 00:07:43.255 "uuid": "4d91b1c4-e480-4618-9bf0-8667fe2b3d57", 00:07:43.255 "strip_size_kb": 0, 00:07:43.255 "state": "online", 00:07:43.255 "raid_level": "raid1", 00:07:43.255 "superblock": false, 00:07:43.255 "num_base_bdevs": 2, 00:07:43.255 "num_base_bdevs_discovered": 2, 00:07:43.255 "num_base_bdevs_operational": 2, 00:07:43.255 "base_bdevs_list": [ 00:07:43.255 { 00:07:43.255 "name": "BaseBdev1", 00:07:43.255 "uuid": "871bfa24-f846-4b2b-a0ac-0bfd604dc6ba", 00:07:43.255 "is_configured": true, 00:07:43.255 "data_offset": 0, 00:07:43.255 "data_size": 65536 00:07:43.255 }, 00:07:43.255 { 00:07:43.255 "name": "BaseBdev2", 
00:07:43.255 "uuid": "b49f6837-2cff-4392-8cc9-51c2e69c0d96", 00:07:43.255 "is_configured": true, 00:07:43.255 "data_offset": 0, 00:07:43.255 "data_size": 65536 00:07:43.255 } 00:07:43.255 ] 00:07:43.255 }' 00:07:43.255 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.255 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.514 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.775 [2024-09-30 23:25:23.372071] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.775 "name": "Existed_Raid", 00:07:43.775 "aliases": [ 00:07:43.775 "4d91b1c4-e480-4618-9bf0-8667fe2b3d57" 00:07:43.775 ], 
00:07:43.775 "product_name": "Raid Volume", 00:07:43.775 "block_size": 512, 00:07:43.775 "num_blocks": 65536, 00:07:43.775 "uuid": "4d91b1c4-e480-4618-9bf0-8667fe2b3d57", 00:07:43.775 "assigned_rate_limits": { 00:07:43.775 "rw_ios_per_sec": 0, 00:07:43.775 "rw_mbytes_per_sec": 0, 00:07:43.775 "r_mbytes_per_sec": 0, 00:07:43.775 "w_mbytes_per_sec": 0 00:07:43.775 }, 00:07:43.775 "claimed": false, 00:07:43.775 "zoned": false, 00:07:43.775 "supported_io_types": { 00:07:43.775 "read": true, 00:07:43.775 "write": true, 00:07:43.775 "unmap": false, 00:07:43.775 "flush": false, 00:07:43.775 "reset": true, 00:07:43.775 "nvme_admin": false, 00:07:43.775 "nvme_io": false, 00:07:43.775 "nvme_io_md": false, 00:07:43.775 "write_zeroes": true, 00:07:43.775 "zcopy": false, 00:07:43.775 "get_zone_info": false, 00:07:43.775 "zone_management": false, 00:07:43.775 "zone_append": false, 00:07:43.775 "compare": false, 00:07:43.775 "compare_and_write": false, 00:07:43.775 "abort": false, 00:07:43.775 "seek_hole": false, 00:07:43.775 "seek_data": false, 00:07:43.775 "copy": false, 00:07:43.775 "nvme_iov_md": false 00:07:43.775 }, 00:07:43.775 "memory_domains": [ 00:07:43.775 { 00:07:43.775 "dma_device_id": "system", 00:07:43.775 "dma_device_type": 1 00:07:43.775 }, 00:07:43.775 { 00:07:43.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.775 "dma_device_type": 2 00:07:43.775 }, 00:07:43.775 { 00:07:43.775 "dma_device_id": "system", 00:07:43.775 "dma_device_type": 1 00:07:43.775 }, 00:07:43.775 { 00:07:43.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.775 "dma_device_type": 2 00:07:43.775 } 00:07:43.775 ], 00:07:43.775 "driver_specific": { 00:07:43.775 "raid": { 00:07:43.775 "uuid": "4d91b1c4-e480-4618-9bf0-8667fe2b3d57", 00:07:43.775 "strip_size_kb": 0, 00:07:43.775 "state": "online", 00:07:43.775 "raid_level": "raid1", 00:07:43.775 "superblock": false, 00:07:43.775 "num_base_bdevs": 2, 00:07:43.775 "num_base_bdevs_discovered": 2, 00:07:43.775 "num_base_bdevs_operational": 
2, 00:07:43.775 "base_bdevs_list": [ 00:07:43.775 { 00:07:43.775 "name": "BaseBdev1", 00:07:43.775 "uuid": "871bfa24-f846-4b2b-a0ac-0bfd604dc6ba", 00:07:43.775 "is_configured": true, 00:07:43.775 "data_offset": 0, 00:07:43.775 "data_size": 65536 00:07:43.775 }, 00:07:43.775 { 00:07:43.775 "name": "BaseBdev2", 00:07:43.775 "uuid": "b49f6837-2cff-4392-8cc9-51c2e69c0d96", 00:07:43.775 "is_configured": true, 00:07:43.775 "data_offset": 0, 00:07:43.775 "data_size": 65536 00:07:43.775 } 00:07:43.775 ] 00:07:43.775 } 00:07:43.775 } 00:07:43.775 }' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.775 BaseBdev2' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.775 23:25:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.775 [2024-09-30 23:25:23.575468] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.775 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.035 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.035 "name": "Existed_Raid", 00:07:44.035 "uuid": 
"4d91b1c4-e480-4618-9bf0-8667fe2b3d57", 00:07:44.035 "strip_size_kb": 0, 00:07:44.035 "state": "online", 00:07:44.035 "raid_level": "raid1", 00:07:44.035 "superblock": false, 00:07:44.035 "num_base_bdevs": 2, 00:07:44.035 "num_base_bdevs_discovered": 1, 00:07:44.035 "num_base_bdevs_operational": 1, 00:07:44.035 "base_bdevs_list": [ 00:07:44.035 { 00:07:44.035 "name": null, 00:07:44.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.035 "is_configured": false, 00:07:44.035 "data_offset": 0, 00:07:44.035 "data_size": 65536 00:07:44.035 }, 00:07:44.035 { 00:07:44.035 "name": "BaseBdev2", 00:07:44.035 "uuid": "b49f6837-2cff-4392-8cc9-51c2e69c0d96", 00:07:44.035 "is_configured": true, 00:07:44.035 "data_offset": 0, 00:07:44.035 "data_size": 65536 00:07:44.035 } 00:07:44.035 ] 00:07:44.035 }' 00:07:44.035 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.035 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.294 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:44.294 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.294 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.294 23:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:44.294 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.294 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.294 23:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.294 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:44.294 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:07:44.294 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:44.294 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.294 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.294 [2024-09-30 23:25:24.034311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:44.294 [2024-09-30 23:25:24.034463] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.294 [2024-09-30 23:25:24.046312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.295 [2024-09-30 23:25:24.046437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.295 [2024-09-30 23:25:24.046478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:44.295 
23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74036 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74036 ']' 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74036 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74036 00:07:44.295 killing process with pid 74036 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74036' 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74036 00:07:44.295 [2024-09-30 23:25:24.141293] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.295 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74036 00:07:44.295 [2024-09-30 23:25:24.142278] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.554 23:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:44.554 00:07:44.554 real 0m3.737s 00:07:44.554 user 0m5.798s 00:07:44.554 sys 0m0.779s 00:07:44.554 ************************************ 00:07:44.554 END TEST raid_state_function_test 00:07:44.554 
************************************ 00:07:44.554 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.554 23:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.814 23:25:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:44.814 23:25:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:44.814 23:25:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.814 23:25:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.814 ************************************ 00:07:44.814 START TEST raid_state_function_test_sb 00:07:44.814 ************************************ 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74273 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74273' 00:07:44.814 Process raid pid: 74273 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74273 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 74273 ']' 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.814 23:25:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.814 [2024-09-30 23:25:24.551839] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:44.814 [2024-09-30 23:25:24.551962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.074 [2024-09-30 23:25:24.713272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.074 [2024-09-30 23:25:24.758929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.074 [2024-09-30 23:25:24.802326] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.074 [2024-09-30 23:25:24.802424] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.649 [2024-09-30 23:25:25.380419] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.649 [2024-09-30 23:25:25.380549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.649 [2024-09-30 23:25:25.380588] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.649 [2024-09-30 23:25:25.380615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.649 "name": "Existed_Raid", 00:07:45.649 "uuid": "d13f06ed-5180-4610-9440-0059833c0f10", 00:07:45.649 "strip_size_kb": 0, 00:07:45.649 "state": "configuring", 00:07:45.649 "raid_level": "raid1", 00:07:45.649 "superblock": true, 00:07:45.649 "num_base_bdevs": 2, 00:07:45.649 "num_base_bdevs_discovered": 0, 00:07:45.649 "num_base_bdevs_operational": 2, 00:07:45.649 "base_bdevs_list": [ 00:07:45.649 { 00:07:45.649 "name": "BaseBdev1", 00:07:45.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.649 "is_configured": false, 00:07:45.649 "data_offset": 0, 00:07:45.649 "data_size": 0 00:07:45.649 }, 00:07:45.649 { 00:07:45.649 "name": "BaseBdev2", 00:07:45.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.649 "is_configured": false, 00:07:45.649 "data_offset": 0, 00:07:45.649 "data_size": 0 00:07:45.649 } 00:07:45.649 ] 00:07:45.649 }' 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.649 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.242 [2024-09-30 23:25:25.783615] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.242 [2024-09-30 23:25:25.783709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.242 [2024-09-30 23:25:25.791642] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.242 [2024-09-30 23:25:25.791724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.242 [2024-09-30 23:25:25.791735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.242 [2024-09-30 23:25:25.791745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:46.242 [2024-09-30 23:25:25.808219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.242 BaseBdev1 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.242 [ 00:07:46.242 { 00:07:46.242 "name": "BaseBdev1", 00:07:46.242 "aliases": [ 00:07:46.242 "eea4fe23-7b0b-4080-88a8-803c7103117b" 00:07:46.242 ], 00:07:46.242 "product_name": "Malloc disk", 00:07:46.242 "block_size": 512, 
00:07:46.242 "num_blocks": 65536, 00:07:46.242 "uuid": "eea4fe23-7b0b-4080-88a8-803c7103117b", 00:07:46.242 "assigned_rate_limits": { 00:07:46.242 "rw_ios_per_sec": 0, 00:07:46.242 "rw_mbytes_per_sec": 0, 00:07:46.242 "r_mbytes_per_sec": 0, 00:07:46.242 "w_mbytes_per_sec": 0 00:07:46.242 }, 00:07:46.242 "claimed": true, 00:07:46.242 "claim_type": "exclusive_write", 00:07:46.242 "zoned": false, 00:07:46.242 "supported_io_types": { 00:07:46.242 "read": true, 00:07:46.242 "write": true, 00:07:46.242 "unmap": true, 00:07:46.242 "flush": true, 00:07:46.242 "reset": true, 00:07:46.242 "nvme_admin": false, 00:07:46.242 "nvme_io": false, 00:07:46.242 "nvme_io_md": false, 00:07:46.242 "write_zeroes": true, 00:07:46.242 "zcopy": true, 00:07:46.242 "get_zone_info": false, 00:07:46.242 "zone_management": false, 00:07:46.242 "zone_append": false, 00:07:46.242 "compare": false, 00:07:46.242 "compare_and_write": false, 00:07:46.242 "abort": true, 00:07:46.242 "seek_hole": false, 00:07:46.242 "seek_data": false, 00:07:46.242 "copy": true, 00:07:46.242 "nvme_iov_md": false 00:07:46.242 }, 00:07:46.242 "memory_domains": [ 00:07:46.242 { 00:07:46.242 "dma_device_id": "system", 00:07:46.242 "dma_device_type": 1 00:07:46.242 }, 00:07:46.242 { 00:07:46.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.242 "dma_device_type": 2 00:07:46.242 } 00:07:46.242 ], 00:07:46.242 "driver_specific": {} 00:07:46.242 } 00:07:46.242 ] 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.242 "name": "Existed_Raid", 00:07:46.242 "uuid": "ecd1c549-231e-42b5-9f29-63eb81f763d9", 00:07:46.242 "strip_size_kb": 0, 00:07:46.242 "state": "configuring", 00:07:46.242 "raid_level": "raid1", 00:07:46.242 "superblock": true, 00:07:46.242 "num_base_bdevs": 2, 00:07:46.242 "num_base_bdevs_discovered": 1, 00:07:46.242 "num_base_bdevs_operational": 2, 00:07:46.242 "base_bdevs_list": [ 00:07:46.242 { 00:07:46.242 "name": "BaseBdev1", 
00:07:46.242 "uuid": "eea4fe23-7b0b-4080-88a8-803c7103117b", 00:07:46.242 "is_configured": true, 00:07:46.242 "data_offset": 2048, 00:07:46.242 "data_size": 63488 00:07:46.242 }, 00:07:46.242 { 00:07:46.242 "name": "BaseBdev2", 00:07:46.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.242 "is_configured": false, 00:07:46.242 "data_offset": 0, 00:07:46.242 "data_size": 0 00:07:46.242 } 00:07:46.242 ] 00:07:46.242 }' 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.242 23:25:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.502 [2024-09-30 23:25:26.227519] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.502 [2024-09-30 23:25:26.227615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.502 [2024-09-30 23:25:26.239542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.502 [2024-09-30 23:25:26.241357] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:07:46.502 [2024-09-30 23:25:26.241430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.502 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.503 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:07:46.503 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.503 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.503 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.503 "name": "Existed_Raid", 00:07:46.503 "uuid": "123e6da2-71a7-4a0a-955a-737b7cf70505", 00:07:46.503 "strip_size_kb": 0, 00:07:46.503 "state": "configuring", 00:07:46.503 "raid_level": "raid1", 00:07:46.503 "superblock": true, 00:07:46.503 "num_base_bdevs": 2, 00:07:46.503 "num_base_bdevs_discovered": 1, 00:07:46.503 "num_base_bdevs_operational": 2, 00:07:46.503 "base_bdevs_list": [ 00:07:46.503 { 00:07:46.503 "name": "BaseBdev1", 00:07:46.503 "uuid": "eea4fe23-7b0b-4080-88a8-803c7103117b", 00:07:46.503 "is_configured": true, 00:07:46.503 "data_offset": 2048, 00:07:46.503 "data_size": 63488 00:07:46.503 }, 00:07:46.503 { 00:07:46.503 "name": "BaseBdev2", 00:07:46.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.503 "is_configured": false, 00:07:46.503 "data_offset": 0, 00:07:46.503 "data_size": 0 00:07:46.503 } 00:07:46.503 ] 00:07:46.503 }' 00:07:46.503 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.503 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.073 [2024-09-30 23:25:26.648984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.073 [2024-09-30 23:25:26.649512] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:47.073 [2024-09-30 23:25:26.649624] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:47.073 BaseBdev2 [2024-09-30 23:25:26.650283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:47.073 [2024-09-30 23:25:26.650711] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:47.073 [2024-09-30 23:25:26.650826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.073 [2024-09-30 23:25:26.651248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- #
[[ 0 == 0 ]] 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.073 [ 00:07:47.073 { 00:07:47.073 "name": "BaseBdev2", 00:07:47.073 "aliases": [ 00:07:47.073 "ba5b699e-6796-4bdd-b358-adc67774027e" 00:07:47.073 ], 00:07:47.073 "product_name": "Malloc disk", 00:07:47.073 "block_size": 512, 00:07:47.073 "num_blocks": 65536, 00:07:47.073 "uuid": "ba5b699e-6796-4bdd-b358-adc67774027e", 00:07:47.073 "assigned_rate_limits": { 00:07:47.073 "rw_ios_per_sec": 0, 00:07:47.073 "rw_mbytes_per_sec": 0, 00:07:47.073 "r_mbytes_per_sec": 0, 00:07:47.073 "w_mbytes_per_sec": 0 00:07:47.073 }, 00:07:47.073 "claimed": true, 00:07:47.073 "claim_type": "exclusive_write", 00:07:47.073 "zoned": false, 00:07:47.073 "supported_io_types": { 00:07:47.073 "read": true, 00:07:47.073 "write": true, 00:07:47.073 "unmap": true, 00:07:47.073 "flush": true, 00:07:47.073 "reset": true, 00:07:47.073 "nvme_admin": false, 00:07:47.073 "nvme_io": false, 00:07:47.073 "nvme_io_md": false, 00:07:47.073 "write_zeroes": true, 00:07:47.073 "zcopy": true, 00:07:47.073 "get_zone_info": false, 00:07:47.073 "zone_management": false, 00:07:47.073 "zone_append": false, 00:07:47.073 "compare": false, 00:07:47.073 "compare_and_write": false, 00:07:47.073 "abort": true, 00:07:47.073 "seek_hole": false, 00:07:47.073 "seek_data": false, 00:07:47.073 "copy": true, 00:07:47.073 "nvme_iov_md": false 00:07:47.073 }, 00:07:47.073 "memory_domains": [ 00:07:47.073 { 00:07:47.073 "dma_device_id": "system", 00:07:47.073 "dma_device_type": 1 00:07:47.073 }, 00:07:47.073 { 00:07:47.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.073 "dma_device_type": 2 00:07:47.073 } 00:07:47.073 ], 00:07:47.073 "driver_specific": 
{} 00:07:47.073 } 00:07:47.073 ] 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:47.073 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.074 "name": "Existed_Raid", 00:07:47.074 "uuid": "123e6da2-71a7-4a0a-955a-737b7cf70505", 00:07:47.074 "strip_size_kb": 0, 00:07:47.074 "state": "online", 00:07:47.074 "raid_level": "raid1", 00:07:47.074 "superblock": true, 00:07:47.074 "num_base_bdevs": 2, 00:07:47.074 "num_base_bdevs_discovered": 2, 00:07:47.074 "num_base_bdevs_operational": 2, 00:07:47.074 "base_bdevs_list": [ 00:07:47.074 { 00:07:47.074 "name": "BaseBdev1", 00:07:47.074 "uuid": "eea4fe23-7b0b-4080-88a8-803c7103117b", 00:07:47.074 "is_configured": true, 00:07:47.074 "data_offset": 2048, 00:07:47.074 "data_size": 63488 00:07:47.074 }, 00:07:47.074 { 00:07:47.074 "name": "BaseBdev2", 00:07:47.074 "uuid": "ba5b699e-6796-4bdd-b358-adc67774027e", 00:07:47.074 "is_configured": true, 00:07:47.074 "data_offset": 2048, 00:07:47.074 "data_size": 63488 00:07:47.074 } 00:07:47.074 ] 00:07:47.074 }' 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.074 23:25:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.333 [2024-09-30 23:25:27.156361] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.333 "name": "Existed_Raid", 00:07:47.333 "aliases": [ 00:07:47.333 "123e6da2-71a7-4a0a-955a-737b7cf70505" 00:07:47.333 ], 00:07:47.333 "product_name": "Raid Volume", 00:07:47.333 "block_size": 512, 00:07:47.333 "num_blocks": 63488, 00:07:47.333 "uuid": "123e6da2-71a7-4a0a-955a-737b7cf70505", 00:07:47.333 "assigned_rate_limits": { 00:07:47.333 "rw_ios_per_sec": 0, 00:07:47.333 "rw_mbytes_per_sec": 0, 00:07:47.333 "r_mbytes_per_sec": 0, 00:07:47.333 "w_mbytes_per_sec": 0 00:07:47.333 }, 00:07:47.333 "claimed": false, 00:07:47.333 "zoned": false, 00:07:47.333 "supported_io_types": { 00:07:47.333 "read": true, 00:07:47.333 "write": true, 00:07:47.333 "unmap": false, 00:07:47.333 "flush": false, 00:07:47.333 "reset": true, 00:07:47.333 "nvme_admin": false, 00:07:47.333 "nvme_io": false, 00:07:47.333 "nvme_io_md": false, 00:07:47.333 "write_zeroes": true, 00:07:47.333 "zcopy": false, 00:07:47.333 "get_zone_info": false, 00:07:47.333 "zone_management": false, 00:07:47.333 "zone_append": false, 00:07:47.333 "compare": false, 00:07:47.333 "compare_and_write": false, 
00:07:47.333 "abort": false, 00:07:47.333 "seek_hole": false, 00:07:47.333 "seek_data": false, 00:07:47.333 "copy": false, 00:07:47.333 "nvme_iov_md": false 00:07:47.333 }, 00:07:47.333 "memory_domains": [ 00:07:47.333 { 00:07:47.333 "dma_device_id": "system", 00:07:47.333 "dma_device_type": 1 00:07:47.333 }, 00:07:47.333 { 00:07:47.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.333 "dma_device_type": 2 00:07:47.333 }, 00:07:47.333 { 00:07:47.333 "dma_device_id": "system", 00:07:47.333 "dma_device_type": 1 00:07:47.333 }, 00:07:47.333 { 00:07:47.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.333 "dma_device_type": 2 00:07:47.333 } 00:07:47.333 ], 00:07:47.333 "driver_specific": { 00:07:47.333 "raid": { 00:07:47.333 "uuid": "123e6da2-71a7-4a0a-955a-737b7cf70505", 00:07:47.333 "strip_size_kb": 0, 00:07:47.333 "state": "online", 00:07:47.333 "raid_level": "raid1", 00:07:47.333 "superblock": true, 00:07:47.333 "num_base_bdevs": 2, 00:07:47.333 "num_base_bdevs_discovered": 2, 00:07:47.333 "num_base_bdevs_operational": 2, 00:07:47.333 "base_bdevs_list": [ 00:07:47.333 { 00:07:47.333 "name": "BaseBdev1", 00:07:47.333 "uuid": "eea4fe23-7b0b-4080-88a8-803c7103117b", 00:07:47.333 "is_configured": true, 00:07:47.333 "data_offset": 2048, 00:07:47.333 "data_size": 63488 00:07:47.333 }, 00:07:47.333 { 00:07:47.333 "name": "BaseBdev2", 00:07:47.333 "uuid": "ba5b699e-6796-4bdd-b358-adc67774027e", 00:07:47.333 "is_configured": true, 00:07:47.333 "data_offset": 2048, 00:07:47.333 "data_size": 63488 00:07:47.333 } 00:07:47.333 ] 00:07:47.333 } 00:07:47.333 } 00:07:47.333 }' 00:07:47.333 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:47.593 BaseBdev2' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.593 [2024-09-30 23:25:27.343827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:47.593 23:25:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.593 "name": "Existed_Raid", 00:07:47.593 "uuid": "123e6da2-71a7-4a0a-955a-737b7cf70505", 00:07:47.593 "strip_size_kb": 0, 00:07:47.593 "state": "online", 00:07:47.593 "raid_level": "raid1", 00:07:47.593 "superblock": true, 00:07:47.593 "num_base_bdevs": 2, 00:07:47.593 "num_base_bdevs_discovered": 1, 00:07:47.593 "num_base_bdevs_operational": 1, 00:07:47.593 "base_bdevs_list": [ 00:07:47.593 { 00:07:47.593 "name": null, 00:07:47.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.593 "is_configured": false, 00:07:47.593 "data_offset": 0, 00:07:47.593 "data_size": 63488 00:07:47.593 }, 00:07:47.593 { 00:07:47.593 "name": "BaseBdev2", 00:07:47.593 "uuid": "ba5b699e-6796-4bdd-b358-adc67774027e", 00:07:47.593 "is_configured": true, 00:07:47.593 "data_offset": 2048, 00:07:47.593 "data_size": 63488 00:07:47.593 } 00:07:47.593 ] 00:07:47.593 }' 00:07:47.593 
23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.593 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.163 [2024-09-30 23:25:27.862258] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.163 [2024-09-30 23:25:27.862409] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.163 [2024-09-30 23:25:27.873837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.163 [2024-09-30 23:25:27.873964] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.163 [2024-09-30 23:25:27.874013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74273 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74273 ']' 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74273 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74273 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74273' 00:07:48.163 killing process with pid 74273 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74273 00:07:48.163 [2024-09-30 23:25:27.971794] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.163 23:25:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74273 00:07:48.163 [2024-09-30 23:25:27.972789] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.423 23:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.423 00:07:48.423 real 0m3.754s 00:07:48.423 user 0m5.833s 00:07:48.423 sys 0m0.786s 00:07:48.423 ************************************ 00:07:48.423 END TEST raid_state_function_test_sb 00:07:48.423 ************************************ 00:07:48.423 23:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.423 23:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.423 23:25:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:48.423 23:25:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:48.423 23:25:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.423 23:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.684 
************************************ 00:07:48.684 START TEST raid_superblock_test 00:07:48.684 ************************************ 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74514 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74514 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74514 ']' 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.684 23:25:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.684 [2024-09-30 23:25:28.386809] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:07:48.684 [2024-09-30 23:25:28.387066] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74514 ] 00:07:48.944 [2024-09-30 23:25:28.552169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.944 [2024-09-30 23:25:28.596598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.944 [2024-09-30 23:25:28.638566] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.944 [2024-09-30 23:25:28.638688] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:49.513 
23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.513 malloc1 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.513 [2024-09-30 23:25:29.224471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:49.513 [2024-09-30 23:25:29.224626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.513 [2024-09-30 23:25:29.224649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:49.513 [2024-09-30 23:25:29.224672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.513 [2024-09-30 23:25:29.226743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.513 [2024-09-30 23:25:29.226786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:49.513 pt1 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.513 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.513 malloc2 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.514 [2024-09-30 23:25:29.263593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:49.514 [2024-09-30 23:25:29.263734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.514 [2024-09-30 23:25:29.263779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:49.514 [2024-09-30 23:25:29.263827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.514 [2024-09-30 23:25:29.266648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.514 [2024-09-30 23:25:29.266744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:49.514 
pt2 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.514 [2024-09-30 23:25:29.275622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:49.514 [2024-09-30 23:25:29.277470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:49.514 [2024-09-30 23:25:29.277637] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:49.514 [2024-09-30 23:25:29.277684] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:49.514 [2024-09-30 23:25:29.277980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:49.514 [2024-09-30 23:25:29.278149] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:49.514 [2024-09-30 23:25:29.278189] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:49.514 [2024-09-30 23:25:29.278356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.514 "name": "raid_bdev1", 00:07:49.514 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:49.514 "strip_size_kb": 0, 00:07:49.514 "state": "online", 00:07:49.514 "raid_level": "raid1", 00:07:49.514 "superblock": true, 00:07:49.514 "num_base_bdevs": 2, 00:07:49.514 "num_base_bdevs_discovered": 2, 00:07:49.514 "num_base_bdevs_operational": 2, 00:07:49.514 "base_bdevs_list": [ 00:07:49.514 { 00:07:49.514 "name": "pt1", 00:07:49.514 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:49.514 "is_configured": true, 00:07:49.514 "data_offset": 2048, 00:07:49.514 "data_size": 63488 00:07:49.514 }, 00:07:49.514 { 00:07:49.514 "name": "pt2", 00:07:49.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.514 "is_configured": true, 00:07:49.514 "data_offset": 2048, 00:07:49.514 "data_size": 63488 00:07:49.514 } 00:07:49.514 ] 00:07:49.514 }' 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.514 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.082 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:50.082 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:50.082 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.082 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.082 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.083 [2024-09-30 23:25:29.723074] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:50.083 "name": "raid_bdev1", 00:07:50.083 "aliases": [ 00:07:50.083 "2f2dac1a-108c-4481-ab37-3bb3959f7d5c" 00:07:50.083 ], 00:07:50.083 "product_name": "Raid Volume", 00:07:50.083 "block_size": 512, 00:07:50.083 "num_blocks": 63488, 00:07:50.083 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:50.083 "assigned_rate_limits": { 00:07:50.083 "rw_ios_per_sec": 0, 00:07:50.083 "rw_mbytes_per_sec": 0, 00:07:50.083 "r_mbytes_per_sec": 0, 00:07:50.083 "w_mbytes_per_sec": 0 00:07:50.083 }, 00:07:50.083 "claimed": false, 00:07:50.083 "zoned": false, 00:07:50.083 "supported_io_types": { 00:07:50.083 "read": true, 00:07:50.083 "write": true, 00:07:50.083 "unmap": false, 00:07:50.083 "flush": false, 00:07:50.083 "reset": true, 00:07:50.083 "nvme_admin": false, 00:07:50.083 "nvme_io": false, 00:07:50.083 "nvme_io_md": false, 00:07:50.083 "write_zeroes": true, 00:07:50.083 "zcopy": false, 00:07:50.083 "get_zone_info": false, 00:07:50.083 "zone_management": false, 00:07:50.083 "zone_append": false, 00:07:50.083 "compare": false, 00:07:50.083 "compare_and_write": false, 00:07:50.083 "abort": false, 00:07:50.083 "seek_hole": false, 00:07:50.083 "seek_data": false, 00:07:50.083 "copy": false, 00:07:50.083 "nvme_iov_md": false 00:07:50.083 }, 00:07:50.083 "memory_domains": [ 00:07:50.083 { 00:07:50.083 "dma_device_id": "system", 00:07:50.083 "dma_device_type": 1 00:07:50.083 }, 00:07:50.083 { 00:07:50.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.083 "dma_device_type": 2 00:07:50.083 }, 00:07:50.083 { 00:07:50.083 "dma_device_id": "system", 00:07:50.083 "dma_device_type": 1 00:07:50.083 }, 00:07:50.083 { 00:07:50.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.083 "dma_device_type": 2 00:07:50.083 } 00:07:50.083 ], 00:07:50.083 "driver_specific": { 00:07:50.083 "raid": { 00:07:50.083 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:50.083 "strip_size_kb": 0, 00:07:50.083 "state": "online", 00:07:50.083 "raid_level": "raid1", 
00:07:50.083 "superblock": true, 00:07:50.083 "num_base_bdevs": 2, 00:07:50.083 "num_base_bdevs_discovered": 2, 00:07:50.083 "num_base_bdevs_operational": 2, 00:07:50.083 "base_bdevs_list": [ 00:07:50.083 { 00:07:50.083 "name": "pt1", 00:07:50.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.083 "is_configured": true, 00:07:50.083 "data_offset": 2048, 00:07:50.083 "data_size": 63488 00:07:50.083 }, 00:07:50.083 { 00:07:50.083 "name": "pt2", 00:07:50.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.083 "is_configured": true, 00:07:50.083 "data_offset": 2048, 00:07:50.083 "data_size": 63488 00:07:50.083 } 00:07:50.083 ] 00:07:50.083 } 00:07:50.083 } 00:07:50.083 }' 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:50.083 pt2' 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.083 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.343 [2024-09-30 23:25:29.958689] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2f2dac1a-108c-4481-ab37-3bb3959f7d5c 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2f2dac1a-108c-4481-ab37-3bb3959f7d5c ']' 00:07:50.343 23:25:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.343 [2024-09-30 23:25:29.986407] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.343 [2024-09-30 23:25:29.986479] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.343 [2024-09-30 23:25:29.986575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.343 [2024-09-30 23:25:29.986682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.343 [2024-09-30 23:25:29.986738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.343 23:25:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.343 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.344 23:25:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 [2024-09-30 23:25:30.130213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:50.344 [2024-09-30 23:25:30.132116] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:50.344 [2024-09-30 23:25:30.132233] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:50.344 [2024-09-30 23:25:30.132325] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:50.344 [2024-09-30 23:25:30.132404] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.344 [2024-09-30 23:25:30.132439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:50.344 request: 00:07:50.344 { 00:07:50.344 "name": "raid_bdev1", 00:07:50.344 "raid_level": "raid1", 00:07:50.344 "base_bdevs": [ 00:07:50.344 "malloc1", 00:07:50.344 "malloc2" 00:07:50.344 ], 00:07:50.344 "superblock": false, 00:07:50.344 "method": "bdev_raid_create", 00:07:50.344 "req_id": 1 00:07:50.344 } 00:07:50.344 Got 
JSON-RPC error response 00:07:50.344 response: 00:07:50.344 { 00:07:50.344 "code": -17, 00:07:50.344 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:50.344 } 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.344 [2024-09-30 23:25:30.186068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:50.344 [2024-09-30 23:25:30.186158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:50.344 [2024-09-30 23:25:30.186194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:50.344 [2024-09-30 23:25:30.186218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.344 [2024-09-30 23:25:30.188305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.344 [2024-09-30 23:25:30.188389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:50.344 [2024-09-30 23:25:30.188492] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:50.344 [2024-09-30 23:25:30.188560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:50.344 pt1 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.344 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.603 
23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.603 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.603 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.603 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.603 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.603 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.603 "name": "raid_bdev1", 00:07:50.603 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:50.603 "strip_size_kb": 0, 00:07:50.603 "state": "configuring", 00:07:50.603 "raid_level": "raid1", 00:07:50.603 "superblock": true, 00:07:50.603 "num_base_bdevs": 2, 00:07:50.603 "num_base_bdevs_discovered": 1, 00:07:50.603 "num_base_bdevs_operational": 2, 00:07:50.603 "base_bdevs_list": [ 00:07:50.603 { 00:07:50.603 "name": "pt1", 00:07:50.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.603 "is_configured": true, 00:07:50.603 "data_offset": 2048, 00:07:50.603 "data_size": 63488 00:07:50.603 }, 00:07:50.603 { 00:07:50.603 "name": null, 00:07:50.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.603 "is_configured": false, 00:07:50.603 "data_offset": 2048, 00:07:50.603 "data_size": 63488 00:07:50.603 } 00:07:50.603 ] 00:07:50.603 }' 00:07:50.603 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.603 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.862 [2024-09-30 23:25:30.629303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:50.862 [2024-09-30 23:25:30.629406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.862 [2024-09-30 23:25:30.629443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:50.862 [2024-09-30 23:25:30.629469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.862 [2024-09-30 23:25:30.629868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.862 [2024-09-30 23:25:30.629933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:50.862 [2024-09-30 23:25:30.630018] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:50.862 [2024-09-30 23:25:30.630061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:50.862 [2024-09-30 23:25:30.630172] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:50.862 [2024-09-30 23:25:30.630210] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.862 [2024-09-30 23:25:30.630441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:50.862 [2024-09-30 23:25:30.630595] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:50.862 [2024-09-30 23:25:30.630641] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006980 00:07:50.862 [2024-09-30 23:25:30.630771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.862 pt2 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.862 "name": "raid_bdev1", 00:07:50.862 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:50.862 "strip_size_kb": 0, 00:07:50.862 "state": "online", 00:07:50.862 "raid_level": "raid1", 00:07:50.862 "superblock": true, 00:07:50.862 "num_base_bdevs": 2, 00:07:50.862 "num_base_bdevs_discovered": 2, 00:07:50.862 "num_base_bdevs_operational": 2, 00:07:50.862 "base_bdevs_list": [ 00:07:50.862 { 00:07:50.862 "name": "pt1", 00:07:50.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:50.862 "is_configured": true, 00:07:50.862 "data_offset": 2048, 00:07:50.862 "data_size": 63488 00:07:50.862 }, 00:07:50.862 { 00:07:50.862 "name": "pt2", 00:07:50.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:50.862 "is_configured": true, 00:07:50.862 "data_offset": 2048, 00:07:50.862 "data_size": 63488 00:07:50.862 } 00:07:50.862 ] 00:07:50.862 }' 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.862 23:25:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.430 [2024-09-30 23:25:31.068783] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.430 "name": "raid_bdev1", 00:07:51.430 "aliases": [ 00:07:51.430 "2f2dac1a-108c-4481-ab37-3bb3959f7d5c" 00:07:51.430 ], 00:07:51.430 "product_name": "Raid Volume", 00:07:51.430 "block_size": 512, 00:07:51.430 "num_blocks": 63488, 00:07:51.430 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:51.430 "assigned_rate_limits": { 00:07:51.430 "rw_ios_per_sec": 0, 00:07:51.430 "rw_mbytes_per_sec": 0, 00:07:51.430 "r_mbytes_per_sec": 0, 00:07:51.430 "w_mbytes_per_sec": 0 00:07:51.430 }, 00:07:51.430 "claimed": false, 00:07:51.430 "zoned": false, 00:07:51.430 "supported_io_types": { 00:07:51.430 "read": true, 00:07:51.430 "write": true, 00:07:51.430 "unmap": false, 00:07:51.430 "flush": false, 00:07:51.430 "reset": true, 00:07:51.430 "nvme_admin": false, 00:07:51.430 "nvme_io": false, 00:07:51.430 "nvme_io_md": false, 00:07:51.430 "write_zeroes": true, 00:07:51.430 "zcopy": false, 00:07:51.430 "get_zone_info": false, 00:07:51.430 "zone_management": false, 00:07:51.430 "zone_append": false, 00:07:51.430 "compare": false, 00:07:51.430 "compare_and_write": false, 00:07:51.430 "abort": false, 00:07:51.430 "seek_hole": false, 00:07:51.430 "seek_data": false, 00:07:51.430 "copy": false, 00:07:51.430 "nvme_iov_md": false 00:07:51.430 }, 00:07:51.430 "memory_domains": [ 00:07:51.430 { 00:07:51.430 "dma_device_id": 
"system", 00:07:51.430 "dma_device_type": 1 00:07:51.430 }, 00:07:51.430 { 00:07:51.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.430 "dma_device_type": 2 00:07:51.430 }, 00:07:51.430 { 00:07:51.430 "dma_device_id": "system", 00:07:51.430 "dma_device_type": 1 00:07:51.430 }, 00:07:51.430 { 00:07:51.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.430 "dma_device_type": 2 00:07:51.430 } 00:07:51.430 ], 00:07:51.430 "driver_specific": { 00:07:51.430 "raid": { 00:07:51.430 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:51.430 "strip_size_kb": 0, 00:07:51.430 "state": "online", 00:07:51.430 "raid_level": "raid1", 00:07:51.430 "superblock": true, 00:07:51.430 "num_base_bdevs": 2, 00:07:51.430 "num_base_bdevs_discovered": 2, 00:07:51.430 "num_base_bdevs_operational": 2, 00:07:51.430 "base_bdevs_list": [ 00:07:51.430 { 00:07:51.430 "name": "pt1", 00:07:51.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.430 "is_configured": true, 00:07:51.430 "data_offset": 2048, 00:07:51.430 "data_size": 63488 00:07:51.430 }, 00:07:51.430 { 00:07:51.430 "name": "pt2", 00:07:51.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.430 "is_configured": true, 00:07:51.430 "data_offset": 2048, 00:07:51.430 "data_size": 63488 00:07:51.430 } 00:07:51.430 ] 00:07:51.430 } 00:07:51.430 } 00:07:51.430 }' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:51.430 pt2' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.430 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.690 [2024-09-30 23:25:31.312345] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2f2dac1a-108c-4481-ab37-3bb3959f7d5c '!=' 2f2dac1a-108c-4481-ab37-3bb3959f7d5c ']' 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.690 [2024-09-30 23:25:31.356049] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.690 "name": "raid_bdev1", 00:07:51.690 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:51.690 "strip_size_kb": 0, 00:07:51.690 "state": "online", 00:07:51.690 "raid_level": "raid1", 00:07:51.690 "superblock": true, 00:07:51.690 "num_base_bdevs": 2, 00:07:51.690 "num_base_bdevs_discovered": 1, 00:07:51.690 "num_base_bdevs_operational": 1, 00:07:51.690 "base_bdevs_list": [ 00:07:51.690 { 00:07:51.690 "name": null, 00:07:51.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.690 "is_configured": false, 00:07:51.690 "data_offset": 0, 00:07:51.690 "data_size": 63488 00:07:51.690 }, 00:07:51.690 { 00:07:51.690 "name": "pt2", 00:07:51.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.690 "is_configured": true, 00:07:51.690 "data_offset": 2048, 00:07:51.690 "data_size": 63488 00:07:51.690 } 00:07:51.690 ] 00:07:51.690 }' 
00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.690 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.259 [2024-09-30 23:25:31.839199] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.259 [2024-09-30 23:25:31.839313] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.259 [2024-09-30 23:25:31.839418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.259 [2024-09-30 23:25:31.839486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.259 [2024-09-30 23:25:31.839526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.259 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.259 [2024-09-30 23:25:31.915049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:52.259 [2024-09-30 23:25:31.915148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.259 [2024-09-30 23:25:31.915184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:52.259 [2024-09-30 23:25:31.915211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.259 
[2024-09-30 23:25:31.917285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.259 [2024-09-30 23:25:31.917355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:52.259 [2024-09-30 23:25:31.917449] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:52.259 [2024-09-30 23:25:31.917502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:52.259 [2024-09-30 23:25:31.917618] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:07:52.260 [2024-09-30 23:25:31.917651] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.260 [2024-09-30 23:25:31.917885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.260 [2024-09-30 23:25:31.918037] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:07:52.260 [2024-09-30 23:25:31.918079] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:07:52.260 [2024-09-30 23:25:31.918209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.260 pt2 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.260 "name": "raid_bdev1", 00:07:52.260 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:52.260 "strip_size_kb": 0, 00:07:52.260 "state": "online", 00:07:52.260 "raid_level": "raid1", 00:07:52.260 "superblock": true, 00:07:52.260 "num_base_bdevs": 2, 00:07:52.260 "num_base_bdevs_discovered": 1, 00:07:52.260 "num_base_bdevs_operational": 1, 00:07:52.260 "base_bdevs_list": [ 00:07:52.260 { 00:07:52.260 "name": null, 00:07:52.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.260 "is_configured": false, 00:07:52.260 "data_offset": 2048, 00:07:52.260 "data_size": 63488 00:07:52.260 }, 00:07:52.260 { 00:07:52.260 "name": "pt2", 00:07:52.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.260 "is_configured": true, 00:07:52.260 "data_offset": 2048, 00:07:52.260 "data_size": 63488 00:07:52.260 } 00:07:52.260 ] 00:07:52.260 }' 
00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.260 23:25:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.519 [2024-09-30 23:25:32.342408] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.519 [2024-09-30 23:25:32.342477] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.519 [2024-09-30 23:25:32.342552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.519 [2024-09-30 23:25:32.342613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.519 [2024-09-30 23:25:32.342646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.519 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.779 [2024-09-30 23:25:32.402264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:52.779 [2024-09-30 23:25:32.402360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.779 [2024-09-30 23:25:32.402415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:52.779 [2024-09-30 23:25:32.402451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.779 [2024-09-30 23:25:32.404524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.779 [2024-09-30 23:25:32.404594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:52.779 [2024-09-30 23:25:32.404688] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:52.779 [2024-09-30 23:25:32.404743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:52.779 [2024-09-30 23:25:32.404857] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:52.779 [2024-09-30 23:25:32.404925] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.779 [2024-09-30 23:25:32.405023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:07:52.779 [2024-09-30 23:25:32.405091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:52.779 [2024-09-30 23:25:32.405194] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:52.779 [2024-09-30 23:25:32.405230] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.779 [2024-09-30 23:25:32.405459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:52.779 pt1 00:07:52.779 [2024-09-30 23:25:32.405602] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:52.779 [2024-09-30 23:25:32.405615] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:07:52.779 [2024-09-30 23:25:32.405719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.779 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.779 "name": "raid_bdev1", 00:07:52.779 "uuid": "2f2dac1a-108c-4481-ab37-3bb3959f7d5c", 00:07:52.779 "strip_size_kb": 0, 00:07:52.779 "state": "online", 00:07:52.779 "raid_level": "raid1", 00:07:52.779 "superblock": true, 00:07:52.779 "num_base_bdevs": 2, 00:07:52.779 "num_base_bdevs_discovered": 1, 00:07:52.779 "num_base_bdevs_operational": 1, 00:07:52.779 "base_bdevs_list": [ 00:07:52.779 { 00:07:52.779 "name": null, 00:07:52.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.780 "is_configured": false, 00:07:52.780 "data_offset": 2048, 00:07:52.780 "data_size": 63488 00:07:52.780 }, 00:07:52.780 { 00:07:52.780 "name": "pt2", 00:07:52.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.780 "is_configured": true, 00:07:52.780 "data_offset": 2048, 00:07:52.780 "data_size": 63488 00:07:52.780 } 00:07:52.780 ] 00:07:52.780 }' 00:07:52.780 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.780 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:53.039 [2024-09-30 23:25:32.869726] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.039 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2f2dac1a-108c-4481-ab37-3bb3959f7d5c '!=' 2f2dac1a-108c-4481-ab37-3bb3959f7d5c ']' 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74514 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74514 ']' 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74514 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74514 00:07:53.298 23:25:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.298 killing process with pid 74514 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74514' 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74514 00:07:53.298 [2024-09-30 23:25:32.956206] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.298 [2024-09-30 23:25:32.956280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.298 [2024-09-30 23:25:32.956321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.298 [2024-09-30 23:25:32.956330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:07:53.298 23:25:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74514 00:07:53.298 [2024-09-30 23:25:32.978594] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.558 23:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:53.558 00:07:53.558 real 0m4.932s 00:07:53.558 user 0m8.040s 00:07:53.558 sys 0m0.996s 00:07:53.558 ************************************ 00:07:53.558 END TEST raid_superblock_test 00:07:53.558 ************************************ 00:07:53.558 23:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.558 23:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.558 23:25:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:53.558 23:25:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:53.558 23:25:33 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.558 23:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.558 ************************************ 00:07:53.558 START TEST raid_read_error_test 00:07:53.559 ************************************ 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:53.559 23:25:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ld8Qj64r17 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74828 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74828 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74828 ']' 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.559 23:25:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.559 [2024-09-30 23:25:33.394454] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:07:53.559 [2024-09-30 23:25:33.394607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74828 ] 00:07:53.819 [2024-09-30 23:25:33.553690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.819 [2024-09-30 23:25:33.598705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.819 [2024-09-30 23:25:33.640894] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.819 [2024-09-30 23:25:33.640928] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.388 BaseBdev1_malloc 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.388 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 true 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 [2024-09-30 23:25:34.250654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:54.648 [2024-09-30 23:25:34.250794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.648 [2024-09-30 23:25:34.250823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:54.648 [2024-09-30 23:25:34.250840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.648 [2024-09-30 23:25:34.252968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.648 [2024-09-30 23:25:34.253011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:54.648 BaseBdev1 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:54.648 BaseBdev2_malloc 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 true 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 [2024-09-30 23:25:34.305590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.648 [2024-09-30 23:25:34.305747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.648 [2024-09-30 23:25:34.305780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:54.648 [2024-09-30 23:25:34.305794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.648 [2024-09-30 23:25:34.308848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.648 [2024-09-30 23:25:34.308915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:54.648 BaseBdev2 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:54.648 23:25:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 [2024-09-30 23:25:34.317751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.648 [2024-09-30 23:25:34.319816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.648 [2024-09-30 23:25:34.320070] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:54.648 [2024-09-30 23:25:34.320124] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.648 [2024-09-30 23:25:34.320405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:54.648 [2024-09-30 23:25:34.320604] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:54.648 [2024-09-30 23:25:34.320656] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:54.648 [2024-09-30 23:25:34.320841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.648 "name": "raid_bdev1", 00:07:54.648 "uuid": "f0bb88e2-5e26-4305-abf3-3422e9e8d9cf", 00:07:54.648 "strip_size_kb": 0, 00:07:54.648 "state": "online", 00:07:54.648 "raid_level": "raid1", 00:07:54.648 "superblock": true, 00:07:54.648 "num_base_bdevs": 2, 00:07:54.648 "num_base_bdevs_discovered": 2, 00:07:54.648 "num_base_bdevs_operational": 2, 00:07:54.648 "base_bdevs_list": [ 00:07:54.648 { 00:07:54.648 "name": "BaseBdev1", 00:07:54.648 "uuid": "5c1023b4-c157-5246-af19-faa76c397d32", 00:07:54.648 "is_configured": true, 00:07:54.648 "data_offset": 2048, 00:07:54.648 "data_size": 63488 00:07:54.648 }, 00:07:54.648 { 00:07:54.648 "name": "BaseBdev2", 00:07:54.648 "uuid": "033e87e1-1c0f-59ec-9e42-d93a0d7507f9", 00:07:54.648 "is_configured": true, 00:07:54.648 "data_offset": 2048, 00:07:54.648 "data_size": 63488 00:07:54.648 } 00:07:54.648 ] 00:07:54.648 }' 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.648 23:25:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.907 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:54.907 23:25:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:55.166 [2024-09-30 23:25:34.813399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.103 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.104 23:25:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.104 "name": "raid_bdev1", 00:07:56.104 "uuid": "f0bb88e2-5e26-4305-abf3-3422e9e8d9cf", 00:07:56.104 "strip_size_kb": 0, 00:07:56.104 "state": "online", 00:07:56.104 "raid_level": "raid1", 00:07:56.104 "superblock": true, 00:07:56.104 "num_base_bdevs": 2, 00:07:56.104 "num_base_bdevs_discovered": 2, 00:07:56.104 "num_base_bdevs_operational": 2, 00:07:56.104 "base_bdevs_list": [ 00:07:56.104 { 00:07:56.104 "name": "BaseBdev1", 00:07:56.104 "uuid": "5c1023b4-c157-5246-af19-faa76c397d32", 00:07:56.104 "is_configured": true, 00:07:56.104 "data_offset": 2048, 00:07:56.104 "data_size": 63488 00:07:56.104 }, 00:07:56.104 { 00:07:56.104 "name": "BaseBdev2", 00:07:56.104 "uuid": "033e87e1-1c0f-59ec-9e42-d93a0d7507f9", 00:07:56.104 "is_configured": true, 00:07:56.104 "data_offset": 2048, 00:07:56.104 "data_size": 63488 
00:07:56.104 } 00:07:56.104 ] 00:07:56.104 }' 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.104 23:25:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.362 [2024-09-30 23:25:36.152511] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.362 [2024-09-30 23:25:36.152634] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.362 [2024-09-30 23:25:36.155172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.362 { 00:07:56.362 "results": [ 00:07:56.362 { 00:07:56.362 "job": "raid_bdev1", 00:07:56.362 "core_mask": "0x1", 00:07:56.362 "workload": "randrw", 00:07:56.362 "percentage": 50, 00:07:56.362 "status": "finished", 00:07:56.362 "queue_depth": 1, 00:07:56.362 "io_size": 131072, 00:07:56.362 "runtime": 1.340079, 00:07:56.362 "iops": 20347.30788259498, 00:07:56.362 "mibps": 2543.4134853243727, 00:07:56.362 "io_failed": 0, 00:07:56.362 "io_timeout": 0, 00:07:56.362 "avg_latency_us": 46.739238611287405, 00:07:56.362 "min_latency_us": 21.463755458515283, 00:07:56.362 "max_latency_us": 1345.0620087336245 00:07:56.362 } 00:07:56.362 ], 00:07:56.362 "core_count": 1 00:07:56.362 } 00:07:56.362 [2024-09-30 23:25:36.155257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.362 [2024-09-30 23:25:36.155342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.362 [2024-09-30 23:25:36.155351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006980 name raid_bdev1, state offline 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74828 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74828 ']' 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74828 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74828 00:07:56.362 killing process with pid 74828 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74828' 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74828 00:07:56.362 [2024-09-30 23:25:36.202170] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.362 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74828 00:07:56.622 [2024-09-30 23:25:36.218079] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ld8Qj64r17 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.622 ************************************ 00:07:56.622 END TEST raid_read_error_test 00:07:56.622 ************************************ 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:56.622 00:07:56.622 real 0m3.166s 00:07:56.622 user 0m3.949s 00:07:56.622 sys 0m0.548s 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.622 23:25:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.881 23:25:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:56.881 23:25:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:56.881 23:25:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.881 23:25:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.881 ************************************ 00:07:56.881 START TEST raid_write_error_test 00:07:56.881 ************************************ 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Jlgmsnbv2b 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74961 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74961 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74961 ']' 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.881 23:25:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.881 [2024-09-30 23:25:36.645846] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
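The bdevperf invocation above runs with `-o 128k -q 1`, and the results JSON it emits (see the raid_read_error_test block earlier in this log) reports both an `iops` and a `mibps` figure. The two are related through the I/O size; a minimal sketch, using only numbers printed in this log (the `io_size` comes from the `-o 128k` flag, the `iops` value from the earlier results JSON — this is an illustration, not part of the test scripts):

```python
# Relate bdevperf's reported "iops" and "mibps" fields via the I/O size.
# io_size comes from the -o 128k flag (131072 bytes per I/O); the iops
# value is copied from the raid_read_error_test results earlier in this log.
io_size = 131072          # bytes per I/O (-o 128k)
iops = 20347.30788259498  # "iops" from the results JSON
mibps = iops * io_size / (1024 * 1024)  # MiB/s
print(round(mibps, 3))    # close to the reported "mibps" field (~2543.413)
```

The same relation holds for the write-test results later in this log (24225.08 iops at 128 KiB works out to ~3028.13 MiB/s, matching its `mibps` field).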
00:07:56.881 [2024-09-30 23:25:36.646001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74961 ] 00:07:57.140 [2024-09-30 23:25:36.808831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.140 [2024-09-30 23:25:36.854346] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.140 [2024-09-30 23:25:36.896782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.140 [2024-09-30 23:25:36.896822] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.709 BaseBdev1_malloc 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.709 true 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.709 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.709 [2024-09-30 23:25:37.486822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:57.709 [2024-09-30 23:25:37.486915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.709 [2024-09-30 23:25:37.486935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:57.709 [2024-09-30 23:25:37.486944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.709 [2024-09-30 23:25:37.489036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.709 [2024-09-30 23:25:37.489075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:57.709 BaseBdev1 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.710 BaseBdev2_malloc 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:57.710 23:25:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.710 true 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.710 [2024-09-30 23:25:37.540563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:57.710 [2024-09-30 23:25:37.540637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.710 [2024-09-30 23:25:37.540665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:57.710 [2024-09-30 23:25:37.540678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.710 [2024-09-30 23:25:37.543599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.710 [2024-09-30 23:25:37.543650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:57.710 BaseBdev2 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.710 [2024-09-30 23:25:37.552591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:57.710 [2024-09-30 23:25:37.554544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.710 [2024-09-30 23:25:37.554777] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:57.710 [2024-09-30 23:25:37.554838] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.710 [2024-09-30 23:25:37.555154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:57.710 [2024-09-30 23:25:37.555339] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:57.710 [2024-09-30 23:25:37.555388] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:57.710 [2024-09-30 23:25:37.555560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.710 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.968 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.968 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.968 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.968 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.968 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.968 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.968 "name": "raid_bdev1", 00:07:57.968 "uuid": "0532767e-06ea-47db-9d2e-66b2029fd149", 00:07:57.968 "strip_size_kb": 0, 00:07:57.968 "state": "online", 00:07:57.968 "raid_level": "raid1", 00:07:57.968 "superblock": true, 00:07:57.968 "num_base_bdevs": 2, 00:07:57.968 "num_base_bdevs_discovered": 2, 00:07:57.968 "num_base_bdevs_operational": 2, 00:07:57.968 "base_bdevs_list": [ 00:07:57.968 { 00:07:57.969 "name": "BaseBdev1", 00:07:57.969 "uuid": "3c512720-d6ec-539c-8e87-df09bfc07995", 00:07:57.969 "is_configured": true, 00:07:57.969 "data_offset": 2048, 00:07:57.969 "data_size": 63488 00:07:57.969 }, 00:07:57.969 { 00:07:57.969 "name": "BaseBdev2", 00:07:57.969 "uuid": "4f297402-645a-5b37-92a5-c9289d195d17", 00:07:57.969 "is_configured": true, 00:07:57.969 "data_offset": 2048, 00:07:57.969 "data_size": 63488 00:07:57.969 } 00:07:57.969 ] 00:07:57.969 }' 00:07:57.969 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.969 23:25:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.227 23:25:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:58.227 23:25:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.486 [2024-09-30 23:25:38.096013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.425 [2024-09-30 23:25:39.012184] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:59.425 [2024-09-30 23:25:39.012351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.425 [2024-09-30 23:25:39.012563] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.425 "name": "raid_bdev1", 00:07:59.425 "uuid": "0532767e-06ea-47db-9d2e-66b2029fd149", 00:07:59.425 "strip_size_kb": 0, 00:07:59.425 "state": "online", 00:07:59.425 "raid_level": "raid1", 00:07:59.425 "superblock": true, 00:07:59.425 "num_base_bdevs": 2, 00:07:59.425 "num_base_bdevs_discovered": 1, 00:07:59.425 "num_base_bdevs_operational": 1, 00:07:59.425 "base_bdevs_list": [ 00:07:59.425 { 00:07:59.425 "name": null, 00:07:59.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.425 "is_configured": false, 00:07:59.425 "data_offset": 0, 00:07:59.425 "data_size": 63488 00:07:59.425 }, 00:07:59.425 { 00:07:59.425 "name": 
"BaseBdev2", 00:07:59.425 "uuid": "4f297402-645a-5b37-92a5-c9289d195d17", 00:07:59.425 "is_configured": true, 00:07:59.425 "data_offset": 2048, 00:07:59.425 "data_size": 63488 00:07:59.425 } 00:07:59.425 ] 00:07:59.425 }' 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.425 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.684 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.684 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.684 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.684 [2024-09-30 23:25:39.464924] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.684 [2024-09-30 23:25:39.465047] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.684 [2024-09-30 23:25:39.467519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.684 [2024-09-30 23:25:39.467615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.684 [2024-09-30 23:25:39.467687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.685 [2024-09-30 23:25:39.467761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:59.685 { 00:07:59.685 "results": [ 00:07:59.685 { 00:07:59.685 "job": "raid_bdev1", 00:07:59.685 "core_mask": "0x1", 00:07:59.685 "workload": "randrw", 00:07:59.685 "percentage": 50, 00:07:59.685 "status": "finished", 00:07:59.685 "queue_depth": 1, 00:07:59.685 "io_size": 131072, 00:07:59.685 "runtime": 1.369944, 00:07:59.685 "iops": 24225.077813399672, 00:07:59.685 "mibps": 3028.134726674959, 00:07:59.685 "io_failed": 0, 00:07:59.685 "io_timeout": 0, 
00:07:59.685 "avg_latency_us": 38.84811991016107, 00:07:59.685 "min_latency_us": 21.016593886462882, 00:07:59.685 "max_latency_us": 1330.7528384279476 00:07:59.685 } 00:07:59.685 ], 00:07:59.685 "core_count": 1 00:07:59.685 } 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74961 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74961 ']' 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74961 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74961 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.685 killing process with pid 74961 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74961' 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74961 00:07:59.685 [2024-09-30 23:25:39.516429] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.685 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74961 00:07:59.685 [2024-09-30 23:25:39.531480] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v 
Job /raidtest/tmp.Jlgmsnbv2b 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:59.943 00:07:59.943 real 0m3.234s 00:07:59.943 user 0m4.084s 00:07:59.943 sys 0m0.539s 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.943 ************************************ 00:07:59.943 END TEST raid_write_error_test 00:07:59.943 ************************************ 00:07:59.943 23:25:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.204 23:25:39 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:00.204 23:25:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:00.204 23:25:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:00.204 23:25:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.204 23:25:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.204 23:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.204 ************************************ 00:08:00.204 START TEST raid_state_function_test 00:08:00.204 ************************************ 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.204 23:25:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75089 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75089' 00:08:00.204 Process raid pid: 75089 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75089 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75089 ']' 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.204 23:25:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.204 [2024-09-30 23:25:39.949367] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:08:00.204 [2024-09-30 23:25:39.949597] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.464 [2024-09-30 23:25:40.114657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.464 [2024-09-30 23:25:40.161761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.464 [2024-09-30 23:25:40.205206] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.464 [2024-09-30 23:25:40.205319] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.033 [2024-09-30 23:25:40.782819] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.033 [2024-09-30 23:25:40.782885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.033 [2024-09-30 23:25:40.782908] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.033 [2024-09-30 23:25:40.782918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.033 [2024-09-30 23:25:40.782924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:01.033 [2024-09-30 23:25:40.782935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.033 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.034 23:25:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.034 "name": "Existed_Raid", 00:08:01.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.034 "strip_size_kb": 64, 00:08:01.034 "state": "configuring", 00:08:01.034 "raid_level": "raid0", 00:08:01.034 "superblock": false, 00:08:01.034 "num_base_bdevs": 3, 00:08:01.034 "num_base_bdevs_discovered": 0, 00:08:01.034 "num_base_bdevs_operational": 3, 00:08:01.034 "base_bdevs_list": [ 00:08:01.034 { 00:08:01.034 "name": "BaseBdev1", 00:08:01.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.034 "is_configured": false, 00:08:01.034 "data_offset": 0, 00:08:01.034 "data_size": 0 00:08:01.034 }, 00:08:01.034 { 00:08:01.034 "name": "BaseBdev2", 00:08:01.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.034 "is_configured": false, 00:08:01.034 "data_offset": 0, 00:08:01.034 "data_size": 0 00:08:01.034 }, 00:08:01.034 { 00:08:01.034 "name": "BaseBdev3", 00:08:01.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.034 "is_configured": false, 00:08:01.034 "data_offset": 0, 00:08:01.034 "data_size": 0 00:08:01.034 } 00:08:01.034 ] 00:08:01.034 }' 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.034 23:25:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.604 23:25:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.604 [2024-09-30 23:25:41.214239] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.604 [2024-09-30 23:25:41.214348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.604 [2024-09-30 23:25:41.222253] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.604 [2024-09-30 23:25:41.222352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.604 [2024-09-30 23:25:41.222378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.604 [2024-09-30 23:25:41.222400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.604 [2024-09-30 23:25:41.222417] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:01.604 [2024-09-30 23:25:41.222437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.604 [2024-09-30 23:25:41.238949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.604 BaseBdev1 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.604 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.604 [ 00:08:01.604 { 00:08:01.604 "name": "BaseBdev1", 00:08:01.604 "aliases": [ 00:08:01.604 "1a4d35af-3273-45d9-ab5a-f4c33db54f50" 00:08:01.604 ], 00:08:01.604 
"product_name": "Malloc disk", 00:08:01.604 "block_size": 512, 00:08:01.604 "num_blocks": 65536, 00:08:01.604 "uuid": "1a4d35af-3273-45d9-ab5a-f4c33db54f50", 00:08:01.604 "assigned_rate_limits": { 00:08:01.604 "rw_ios_per_sec": 0, 00:08:01.604 "rw_mbytes_per_sec": 0, 00:08:01.604 "r_mbytes_per_sec": 0, 00:08:01.604 "w_mbytes_per_sec": 0 00:08:01.604 }, 00:08:01.604 "claimed": true, 00:08:01.604 "claim_type": "exclusive_write", 00:08:01.604 "zoned": false, 00:08:01.604 "supported_io_types": { 00:08:01.604 "read": true, 00:08:01.604 "write": true, 00:08:01.604 "unmap": true, 00:08:01.604 "flush": true, 00:08:01.604 "reset": true, 00:08:01.604 "nvme_admin": false, 00:08:01.604 "nvme_io": false, 00:08:01.604 "nvme_io_md": false, 00:08:01.604 "write_zeroes": true, 00:08:01.604 "zcopy": true, 00:08:01.604 "get_zone_info": false, 00:08:01.604 "zone_management": false, 00:08:01.604 "zone_append": false, 00:08:01.604 "compare": false, 00:08:01.604 "compare_and_write": false, 00:08:01.604 "abort": true, 00:08:01.604 "seek_hole": false, 00:08:01.604 "seek_data": false, 00:08:01.604 "copy": true, 00:08:01.604 "nvme_iov_md": false 00:08:01.604 }, 00:08:01.604 "memory_domains": [ 00:08:01.604 { 00:08:01.604 "dma_device_id": "system", 00:08:01.604 "dma_device_type": 1 00:08:01.604 }, 00:08:01.604 { 00:08:01.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.604 "dma_device_type": 2 00:08:01.604 } 00:08:01.604 ], 00:08:01.604 "driver_specific": {} 00:08:01.604 } 00:08:01.605 ] 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.605 23:25:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.605 "name": "Existed_Raid", 00:08:01.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.605 "strip_size_kb": 64, 00:08:01.605 "state": "configuring", 00:08:01.605 "raid_level": "raid0", 00:08:01.605 "superblock": false, 00:08:01.605 "num_base_bdevs": 3, 00:08:01.605 "num_base_bdevs_discovered": 1, 00:08:01.605 "num_base_bdevs_operational": 3, 00:08:01.605 "base_bdevs_list": [ 00:08:01.605 { 00:08:01.605 "name": "BaseBdev1", 
00:08:01.605 "uuid": "1a4d35af-3273-45d9-ab5a-f4c33db54f50", 00:08:01.605 "is_configured": true, 00:08:01.605 "data_offset": 0, 00:08:01.605 "data_size": 65536 00:08:01.605 }, 00:08:01.605 { 00:08:01.605 "name": "BaseBdev2", 00:08:01.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.605 "is_configured": false, 00:08:01.605 "data_offset": 0, 00:08:01.605 "data_size": 0 00:08:01.605 }, 00:08:01.605 { 00:08:01.605 "name": "BaseBdev3", 00:08:01.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.605 "is_configured": false, 00:08:01.605 "data_offset": 0, 00:08:01.605 "data_size": 0 00:08:01.605 } 00:08:01.605 ] 00:08:01.605 }' 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.605 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.176 [2024-09-30 23:25:41.734329] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.176 [2024-09-30 23:25:41.734372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.176 [2024-09-30 
23:25:41.746343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.176 [2024-09-30 23:25:41.748221] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.176 [2024-09-30 23:25:41.748261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.176 [2024-09-30 23:25:41.748270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:02.176 [2024-09-30 23:25:41.748280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.176 "name": "Existed_Raid", 00:08:02.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.176 "strip_size_kb": 64, 00:08:02.176 "state": "configuring", 00:08:02.176 "raid_level": "raid0", 00:08:02.176 "superblock": false, 00:08:02.176 "num_base_bdevs": 3, 00:08:02.176 "num_base_bdevs_discovered": 1, 00:08:02.176 "num_base_bdevs_operational": 3, 00:08:02.176 "base_bdevs_list": [ 00:08:02.176 { 00:08:02.176 "name": "BaseBdev1", 00:08:02.176 "uuid": "1a4d35af-3273-45d9-ab5a-f4c33db54f50", 00:08:02.176 "is_configured": true, 00:08:02.176 "data_offset": 0, 00:08:02.176 "data_size": 65536 00:08:02.176 }, 00:08:02.176 { 00:08:02.176 "name": "BaseBdev2", 00:08:02.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.176 "is_configured": false, 00:08:02.176 "data_offset": 0, 00:08:02.176 "data_size": 0 00:08:02.176 }, 00:08:02.176 { 00:08:02.176 "name": "BaseBdev3", 00:08:02.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.176 "is_configured": false, 00:08:02.176 "data_offset": 0, 00:08:02.176 "data_size": 0 00:08:02.176 } 00:08:02.176 ] 00:08:02.176 }' 00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:02.176 23:25:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.436 [2024-09-30 23:25:42.211952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.436 BaseBdev2 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.436 23:25:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.436 [ 00:08:02.436 { 00:08:02.436 "name": "BaseBdev2", 00:08:02.436 "aliases": [ 00:08:02.436 "f3658dd3-91fd-4302-a9d9-a0e23c2d30cd" 00:08:02.436 ], 00:08:02.436 "product_name": "Malloc disk", 00:08:02.436 "block_size": 512, 00:08:02.436 "num_blocks": 65536, 00:08:02.436 "uuid": "f3658dd3-91fd-4302-a9d9-a0e23c2d30cd", 00:08:02.436 "assigned_rate_limits": { 00:08:02.436 "rw_ios_per_sec": 0, 00:08:02.436 "rw_mbytes_per_sec": 0, 00:08:02.436 "r_mbytes_per_sec": 0, 00:08:02.436 "w_mbytes_per_sec": 0 00:08:02.436 }, 00:08:02.436 "claimed": true, 00:08:02.436 "claim_type": "exclusive_write", 00:08:02.436 "zoned": false, 00:08:02.436 "supported_io_types": { 00:08:02.436 "read": true, 00:08:02.436 "write": true, 00:08:02.436 "unmap": true, 00:08:02.436 "flush": true, 00:08:02.436 "reset": true, 00:08:02.436 "nvme_admin": false, 00:08:02.436 "nvme_io": false, 00:08:02.436 "nvme_io_md": false, 00:08:02.436 "write_zeroes": true, 00:08:02.436 "zcopy": true, 00:08:02.436 "get_zone_info": false, 00:08:02.436 "zone_management": false, 00:08:02.436 "zone_append": false, 00:08:02.436 "compare": false, 00:08:02.436 "compare_and_write": false, 00:08:02.436 "abort": true, 00:08:02.436 "seek_hole": false, 00:08:02.436 "seek_data": false, 00:08:02.436 "copy": true, 00:08:02.436 "nvme_iov_md": false 00:08:02.436 }, 00:08:02.436 "memory_domains": [ 00:08:02.436 { 00:08:02.436 "dma_device_id": "system", 00:08:02.436 "dma_device_type": 1 00:08:02.436 }, 00:08:02.436 { 00:08:02.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.436 "dma_device_type": 2 00:08:02.436 } 00:08:02.436 ], 00:08:02.436 "driver_specific": {} 00:08:02.436 } 00:08:02.436 ] 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.436 23:25:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.436 23:25:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.695 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.695 "name": "Existed_Raid", 00:08:02.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.695 "strip_size_kb": 64, 00:08:02.695 "state": "configuring", 00:08:02.695 "raid_level": "raid0", 00:08:02.695 "superblock": false, 00:08:02.695 "num_base_bdevs": 3, 00:08:02.695 "num_base_bdevs_discovered": 2, 00:08:02.695 "num_base_bdevs_operational": 3, 00:08:02.695 "base_bdevs_list": [ 00:08:02.695 { 00:08:02.695 "name": "BaseBdev1", 00:08:02.695 "uuid": "1a4d35af-3273-45d9-ab5a-f4c33db54f50", 00:08:02.695 "is_configured": true, 00:08:02.695 "data_offset": 0, 00:08:02.695 "data_size": 65536 00:08:02.695 }, 00:08:02.695 { 00:08:02.695 "name": "BaseBdev2", 00:08:02.695 "uuid": "f3658dd3-91fd-4302-a9d9-a0e23c2d30cd", 00:08:02.695 "is_configured": true, 00:08:02.695 "data_offset": 0, 00:08:02.695 "data_size": 65536 00:08:02.695 }, 00:08:02.695 { 00:08:02.695 "name": "BaseBdev3", 00:08:02.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.695 "is_configured": false, 00:08:02.695 "data_offset": 0, 00:08:02.695 "data_size": 0 00:08:02.695 } 00:08:02.695 ] 00:08:02.695 }' 00:08:02.695 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.695 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 [2024-09-30 23:25:42.657974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:02.955 [2024-09-30 23:25:42.658014] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:02.955 [2024-09-30 23:25:42.658026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:02.955 [2024-09-30 23:25:42.658314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:02.955 [2024-09-30 23:25:42.658439] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:02.955 [2024-09-30 23:25:42.658448] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:02.955 [2024-09-30 23:25:42.658667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.955 BaseBdev3 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 
23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 [ 00:08:02.955 { 00:08:02.955 "name": "BaseBdev3", 00:08:02.955 "aliases": [ 00:08:02.955 "1a39852f-d0f2-410d-83b6-65d7301e9660" 00:08:02.955 ], 00:08:02.955 "product_name": "Malloc disk", 00:08:02.955 "block_size": 512, 00:08:02.955 "num_blocks": 65536, 00:08:02.955 "uuid": "1a39852f-d0f2-410d-83b6-65d7301e9660", 00:08:02.955 "assigned_rate_limits": { 00:08:02.955 "rw_ios_per_sec": 0, 00:08:02.955 "rw_mbytes_per_sec": 0, 00:08:02.955 "r_mbytes_per_sec": 0, 00:08:02.955 "w_mbytes_per_sec": 0 00:08:02.955 }, 00:08:02.955 "claimed": true, 00:08:02.955 "claim_type": "exclusive_write", 00:08:02.955 "zoned": false, 00:08:02.955 "supported_io_types": { 00:08:02.955 "read": true, 00:08:02.955 "write": true, 00:08:02.955 "unmap": true, 00:08:02.955 "flush": true, 00:08:02.955 "reset": true, 00:08:02.955 "nvme_admin": false, 00:08:02.955 "nvme_io": false, 00:08:02.955 "nvme_io_md": false, 00:08:02.955 "write_zeroes": true, 00:08:02.955 "zcopy": true, 00:08:02.955 "get_zone_info": false, 00:08:02.955 "zone_management": false, 00:08:02.955 "zone_append": false, 00:08:02.955 "compare": false, 00:08:02.955 "compare_and_write": false, 00:08:02.955 "abort": true, 00:08:02.955 "seek_hole": false, 00:08:02.955 "seek_data": false, 00:08:02.955 "copy": true, 00:08:02.955 "nvme_iov_md": false 00:08:02.955 }, 00:08:02.955 "memory_domains": [ 00:08:02.955 { 00:08:02.955 "dma_device_id": "system", 00:08:02.955 "dma_device_type": 1 00:08:02.955 }, 00:08:02.955 { 00:08:02.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.955 "dma_device_type": 2 00:08:02.955 } 00:08:02.955 ], 00:08:02.955 "driver_specific": {} 00:08:02.955 } 00:08:02.955 ] 
00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.955 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.955 "name": "Existed_Raid", 00:08:02.955 "uuid": "de7022df-68fd-4db5-bad0-2e06a918fcf5", 00:08:02.955 "strip_size_kb": 64, 00:08:02.955 "state": "online", 00:08:02.955 "raid_level": "raid0", 00:08:02.955 "superblock": false, 00:08:02.955 "num_base_bdevs": 3, 00:08:02.955 "num_base_bdevs_discovered": 3, 00:08:02.955 "num_base_bdevs_operational": 3, 00:08:02.956 "base_bdevs_list": [ 00:08:02.956 { 00:08:02.956 "name": "BaseBdev1", 00:08:02.956 "uuid": "1a4d35af-3273-45d9-ab5a-f4c33db54f50", 00:08:02.956 "is_configured": true, 00:08:02.956 "data_offset": 0, 00:08:02.956 "data_size": 65536 00:08:02.956 }, 00:08:02.956 { 00:08:02.956 "name": "BaseBdev2", 00:08:02.956 "uuid": "f3658dd3-91fd-4302-a9d9-a0e23c2d30cd", 00:08:02.956 "is_configured": true, 00:08:02.956 "data_offset": 0, 00:08:02.956 "data_size": 65536 00:08:02.956 }, 00:08:02.956 { 00:08:02.956 "name": "BaseBdev3", 00:08:02.956 "uuid": "1a39852f-d0f2-410d-83b6-65d7301e9660", 00:08:02.956 "is_configured": true, 00:08:02.956 "data_offset": 0, 00:08:02.956 "data_size": 65536 00:08:02.956 } 00:08:02.956 ] 00:08:02.956 }' 00:08:02.956 23:25:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.956 23:25:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.215 [2024-09-30 23:25:43.017631] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.215 "name": "Existed_Raid", 00:08:03.215 "aliases": [ 00:08:03.215 "de7022df-68fd-4db5-bad0-2e06a918fcf5" 00:08:03.215 ], 00:08:03.215 "product_name": "Raid Volume", 00:08:03.215 "block_size": 512, 00:08:03.215 "num_blocks": 196608, 00:08:03.215 "uuid": "de7022df-68fd-4db5-bad0-2e06a918fcf5", 00:08:03.215 "assigned_rate_limits": { 00:08:03.215 "rw_ios_per_sec": 0, 00:08:03.215 "rw_mbytes_per_sec": 0, 00:08:03.215 "r_mbytes_per_sec": 0, 00:08:03.215 "w_mbytes_per_sec": 0 00:08:03.215 }, 00:08:03.215 "claimed": false, 00:08:03.215 "zoned": false, 00:08:03.215 "supported_io_types": { 00:08:03.215 "read": true, 00:08:03.215 "write": true, 00:08:03.215 "unmap": true, 00:08:03.215 "flush": true, 00:08:03.215 "reset": true, 00:08:03.215 "nvme_admin": false, 00:08:03.215 "nvme_io": false, 00:08:03.215 "nvme_io_md": false, 00:08:03.215 "write_zeroes": true, 00:08:03.215 "zcopy": false, 00:08:03.215 "get_zone_info": false, 00:08:03.215 "zone_management": false, 00:08:03.215 
"zone_append": false, 00:08:03.215 "compare": false, 00:08:03.215 "compare_and_write": false, 00:08:03.215 "abort": false, 00:08:03.215 "seek_hole": false, 00:08:03.215 "seek_data": false, 00:08:03.215 "copy": false, 00:08:03.215 "nvme_iov_md": false 00:08:03.215 }, 00:08:03.215 "memory_domains": [ 00:08:03.215 { 00:08:03.215 "dma_device_id": "system", 00:08:03.215 "dma_device_type": 1 00:08:03.215 }, 00:08:03.215 { 00:08:03.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.215 "dma_device_type": 2 00:08:03.215 }, 00:08:03.215 { 00:08:03.215 "dma_device_id": "system", 00:08:03.215 "dma_device_type": 1 00:08:03.215 }, 00:08:03.215 { 00:08:03.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.215 "dma_device_type": 2 00:08:03.215 }, 00:08:03.215 { 00:08:03.215 "dma_device_id": "system", 00:08:03.215 "dma_device_type": 1 00:08:03.215 }, 00:08:03.215 { 00:08:03.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.215 "dma_device_type": 2 00:08:03.215 } 00:08:03.215 ], 00:08:03.215 "driver_specific": { 00:08:03.215 "raid": { 00:08:03.215 "uuid": "de7022df-68fd-4db5-bad0-2e06a918fcf5", 00:08:03.215 "strip_size_kb": 64, 00:08:03.215 "state": "online", 00:08:03.215 "raid_level": "raid0", 00:08:03.215 "superblock": false, 00:08:03.215 "num_base_bdevs": 3, 00:08:03.215 "num_base_bdevs_discovered": 3, 00:08:03.215 "num_base_bdevs_operational": 3, 00:08:03.215 "base_bdevs_list": [ 00:08:03.215 { 00:08:03.215 "name": "BaseBdev1", 00:08:03.215 "uuid": "1a4d35af-3273-45d9-ab5a-f4c33db54f50", 00:08:03.215 "is_configured": true, 00:08:03.215 "data_offset": 0, 00:08:03.215 "data_size": 65536 00:08:03.215 }, 00:08:03.215 { 00:08:03.215 "name": "BaseBdev2", 00:08:03.215 "uuid": "f3658dd3-91fd-4302-a9d9-a0e23c2d30cd", 00:08:03.215 "is_configured": true, 00:08:03.215 "data_offset": 0, 00:08:03.215 "data_size": 65536 00:08:03.215 }, 00:08:03.215 { 00:08:03.215 "name": "BaseBdev3", 00:08:03.215 "uuid": "1a39852f-d0f2-410d-83b6-65d7301e9660", 00:08:03.215 "is_configured": true, 
00:08:03.215 "data_offset": 0, 00:08:03.215 "data_size": 65536 00:08:03.215 } 00:08:03.215 ] 00:08:03.215 } 00:08:03.215 } 00:08:03.215 }' 00:08:03.215 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:03.475 BaseBdev2 00:08:03.475 BaseBdev3' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.475 23:25:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.475 [2024-09-30 23:25:43.273015] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.475 [2024-09-30 23:25:43.273043] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.475 [2024-09-30 23:25:43.273105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.475 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.476 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.476 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.476 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.476 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.735 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.735 "name": "Existed_Raid", 00:08:03.735 "uuid": "de7022df-68fd-4db5-bad0-2e06a918fcf5", 00:08:03.735 "strip_size_kb": 64, 00:08:03.735 "state": "offline", 00:08:03.735 "raid_level": "raid0", 00:08:03.735 "superblock": false, 00:08:03.735 "num_base_bdevs": 3, 00:08:03.735 "num_base_bdevs_discovered": 2, 00:08:03.735 "num_base_bdevs_operational": 2, 00:08:03.735 "base_bdevs_list": [ 00:08:03.735 { 00:08:03.735 "name": null, 00:08:03.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.735 "is_configured": false, 00:08:03.735 "data_offset": 0, 00:08:03.735 "data_size": 65536 00:08:03.735 }, 00:08:03.735 { 00:08:03.735 "name": "BaseBdev2", 00:08:03.735 "uuid": "f3658dd3-91fd-4302-a9d9-a0e23c2d30cd", 00:08:03.735 "is_configured": true, 00:08:03.735 "data_offset": 0, 00:08:03.735 "data_size": 65536 00:08:03.735 }, 00:08:03.735 { 00:08:03.735 "name": "BaseBdev3", 00:08:03.735 "uuid": "1a39852f-d0f2-410d-83b6-65d7301e9660", 00:08:03.735 "is_configured": true, 00:08:03.735 "data_offset": 0, 00:08:03.735 "data_size": 65536 00:08:03.735 } 00:08:03.735 ] 00:08:03.735 }' 00:08:03.735 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.735 23:25:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.994 [2024-09-30 23:25:43.751472] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.994 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.995 [2024-09-30 23:25:43.822795] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:03.995 [2024-09-30 23:25:43.822844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.995 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.255 23:25:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.255 BaseBdev2 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.255 23:25:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.255 [ 00:08:04.255 { 00:08:04.255 "name": "BaseBdev2", 00:08:04.255 "aliases": [ 00:08:04.255 "3fd34890-5075-46ed-931f-c7ab2a7ca007" 00:08:04.255 ], 00:08:04.255 "product_name": "Malloc disk", 00:08:04.255 "block_size": 512, 00:08:04.255 "num_blocks": 65536, 00:08:04.255 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:04.255 "assigned_rate_limits": { 00:08:04.255 "rw_ios_per_sec": 0, 00:08:04.255 "rw_mbytes_per_sec": 0, 00:08:04.255 "r_mbytes_per_sec": 0, 00:08:04.255 "w_mbytes_per_sec": 0 00:08:04.255 }, 00:08:04.255 "claimed": false, 00:08:04.255 "zoned": false, 00:08:04.255 "supported_io_types": { 00:08:04.255 "read": true, 00:08:04.255 "write": true, 00:08:04.255 "unmap": true, 00:08:04.255 "flush": true, 00:08:04.255 "reset": true, 00:08:04.255 "nvme_admin": false, 00:08:04.255 "nvme_io": false, 00:08:04.255 "nvme_io_md": false, 00:08:04.255 "write_zeroes": true, 00:08:04.255 "zcopy": true, 00:08:04.255 "get_zone_info": false, 00:08:04.255 "zone_management": false, 00:08:04.255 "zone_append": false, 00:08:04.255 "compare": false, 00:08:04.255 "compare_and_write": false, 00:08:04.255 "abort": true, 00:08:04.255 "seek_hole": false, 00:08:04.255 "seek_data": false, 00:08:04.255 "copy": true, 00:08:04.255 "nvme_iov_md": false 00:08:04.255 }, 00:08:04.255 "memory_domains": [ 00:08:04.255 { 00:08:04.255 "dma_device_id": "system", 00:08:04.255 "dma_device_type": 1 00:08:04.255 }, 00:08:04.255 { 00:08:04.255 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:04.255 "dma_device_type": 2 00:08:04.255 } 00:08:04.255 ], 00:08:04.255 "driver_specific": {} 00:08:04.255 } 00:08:04.255 ] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.255 BaseBdev3 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.255 23:25:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.255 [ 00:08:04.255 { 00:08:04.255 "name": "BaseBdev3", 00:08:04.255 "aliases": [ 00:08:04.255 "3321eee3-29fa-4304-bc46-9e383a698c8b" 00:08:04.255 ], 00:08:04.255 "product_name": "Malloc disk", 00:08:04.255 "block_size": 512, 00:08:04.255 "num_blocks": 65536, 00:08:04.255 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:04.255 "assigned_rate_limits": { 00:08:04.255 "rw_ios_per_sec": 0, 00:08:04.255 "rw_mbytes_per_sec": 0, 00:08:04.255 "r_mbytes_per_sec": 0, 00:08:04.255 "w_mbytes_per_sec": 0 00:08:04.255 }, 00:08:04.255 "claimed": false, 00:08:04.255 "zoned": false, 00:08:04.255 "supported_io_types": { 00:08:04.255 "read": true, 00:08:04.255 "write": true, 00:08:04.255 "unmap": true, 00:08:04.255 "flush": true, 00:08:04.255 "reset": true, 00:08:04.255 "nvme_admin": false, 00:08:04.255 "nvme_io": false, 00:08:04.255 "nvme_io_md": false, 00:08:04.255 "write_zeroes": true, 00:08:04.255 "zcopy": true, 00:08:04.255 "get_zone_info": false, 00:08:04.255 "zone_management": false, 00:08:04.255 "zone_append": false, 00:08:04.255 "compare": false, 00:08:04.255 "compare_and_write": false, 00:08:04.255 "abort": true, 00:08:04.255 "seek_hole": false, 00:08:04.255 "seek_data": false, 00:08:04.255 "copy": true, 00:08:04.255 "nvme_iov_md": false 00:08:04.255 }, 00:08:04.255 "memory_domains": [ 00:08:04.255 { 00:08:04.255 "dma_device_id": "system", 00:08:04.255 "dma_device_type": 1 00:08:04.255 }, 00:08:04.255 { 00:08:04.255 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:04.255 "dma_device_type": 2 00:08:04.255 } 00:08:04.255 ], 00:08:04.255 "driver_specific": {} 00:08:04.255 } 00:08:04.255 ] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:04.255 23:25:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.256 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.256 23:25:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.256 [2024-09-30 23:25:43.997191] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.256 [2024-09-30 23:25:43.997316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.256 [2024-09-30 23:25:43.997358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.256 [2024-09-30 23:25:43.999120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.256 
23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.256 "name": "Existed_Raid", 00:08:04.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.256 "strip_size_kb": 64, 00:08:04.256 "state": "configuring", 00:08:04.256 "raid_level": "raid0", 00:08:04.256 "superblock": false, 00:08:04.256 "num_base_bdevs": 3, 00:08:04.256 "num_base_bdevs_discovered": 2, 00:08:04.256 "num_base_bdevs_operational": 3, 00:08:04.256 "base_bdevs_list": [ 00:08:04.256 { 00:08:04.256 "name": "BaseBdev1", 00:08:04.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.256 "is_configured": false, 00:08:04.256 
"data_offset": 0, 00:08:04.256 "data_size": 0 00:08:04.256 }, 00:08:04.256 { 00:08:04.256 "name": "BaseBdev2", 00:08:04.256 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:04.256 "is_configured": true, 00:08:04.256 "data_offset": 0, 00:08:04.256 "data_size": 65536 00:08:04.256 }, 00:08:04.256 { 00:08:04.256 "name": "BaseBdev3", 00:08:04.256 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:04.256 "is_configured": true, 00:08:04.256 "data_offset": 0, 00:08:04.256 "data_size": 65536 00:08:04.256 } 00:08:04.256 ] 00:08:04.256 }' 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.256 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.822 [2024-09-30 23:25:44.376552] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.822 "name": "Existed_Raid", 00:08:04.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.822 "strip_size_kb": 64, 00:08:04.822 "state": "configuring", 00:08:04.822 "raid_level": "raid0", 00:08:04.822 "superblock": false, 00:08:04.822 "num_base_bdevs": 3, 00:08:04.822 "num_base_bdevs_discovered": 1, 00:08:04.822 "num_base_bdevs_operational": 3, 00:08:04.822 "base_bdevs_list": [ 00:08:04.822 { 00:08:04.822 "name": "BaseBdev1", 00:08:04.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.822 "is_configured": false, 00:08:04.822 "data_offset": 0, 00:08:04.822 "data_size": 0 00:08:04.822 }, 00:08:04.822 { 00:08:04.822 "name": null, 00:08:04.822 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:04.822 "is_configured": false, 00:08:04.822 "data_offset": 0, 00:08:04.822 "data_size": 65536 00:08:04.822 }, 00:08:04.822 { 
00:08:04.822 "name": "BaseBdev3", 00:08:04.822 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:04.822 "is_configured": true, 00:08:04.822 "data_offset": 0, 00:08:04.822 "data_size": 65536 00:08:04.822 } 00:08:04.822 ] 00:08:04.822 }' 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.822 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.081 [2024-09-30 23:25:44.846672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.081 BaseBdev1 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.081 23:25:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.081 [ 00:08:05.081 { 00:08:05.081 "name": "BaseBdev1", 00:08:05.081 "aliases": [ 00:08:05.081 "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8" 00:08:05.081 ], 00:08:05.081 "product_name": "Malloc disk", 00:08:05.081 "block_size": 512, 00:08:05.081 "num_blocks": 65536, 00:08:05.081 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:05.081 "assigned_rate_limits": { 00:08:05.081 "rw_ios_per_sec": 0, 00:08:05.081 "rw_mbytes_per_sec": 0, 00:08:05.081 "r_mbytes_per_sec": 0, 00:08:05.081 "w_mbytes_per_sec": 0 00:08:05.081 }, 00:08:05.081 "claimed": true, 00:08:05.081 "claim_type": "exclusive_write", 00:08:05.081 "zoned": false, 00:08:05.081 "supported_io_types": { 00:08:05.081 "read": true, 00:08:05.081 "write": true, 00:08:05.081 "unmap": true, 00:08:05.081 "flush": true, 
00:08:05.081 "reset": true, 00:08:05.081 "nvme_admin": false, 00:08:05.081 "nvme_io": false, 00:08:05.081 "nvme_io_md": false, 00:08:05.081 "write_zeroes": true, 00:08:05.081 "zcopy": true, 00:08:05.081 "get_zone_info": false, 00:08:05.081 "zone_management": false, 00:08:05.081 "zone_append": false, 00:08:05.081 "compare": false, 00:08:05.081 "compare_and_write": false, 00:08:05.081 "abort": true, 00:08:05.081 "seek_hole": false, 00:08:05.081 "seek_data": false, 00:08:05.081 "copy": true, 00:08:05.081 "nvme_iov_md": false 00:08:05.081 }, 00:08:05.081 "memory_domains": [ 00:08:05.081 { 00:08:05.081 "dma_device_id": "system", 00:08:05.081 "dma_device_type": 1 00:08:05.081 }, 00:08:05.081 { 00:08:05.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.081 "dma_device_type": 2 00:08:05.081 } 00:08:05.081 ], 00:08:05.081 "driver_specific": {} 00:08:05.081 } 00:08:05.081 ] 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.081 "name": "Existed_Raid", 00:08:05.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.081 "strip_size_kb": 64, 00:08:05.081 "state": "configuring", 00:08:05.081 "raid_level": "raid0", 00:08:05.081 "superblock": false, 00:08:05.081 "num_base_bdevs": 3, 00:08:05.081 "num_base_bdevs_discovered": 2, 00:08:05.081 "num_base_bdevs_operational": 3, 00:08:05.081 "base_bdevs_list": [ 00:08:05.081 { 00:08:05.081 "name": "BaseBdev1", 00:08:05.081 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:05.081 "is_configured": true, 00:08:05.081 "data_offset": 0, 00:08:05.081 "data_size": 65536 00:08:05.081 }, 00:08:05.081 { 00:08:05.081 "name": null, 00:08:05.081 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:05.081 "is_configured": false, 00:08:05.081 "data_offset": 0, 00:08:05.081 "data_size": 65536 00:08:05.081 }, 00:08:05.081 { 00:08:05.081 "name": "BaseBdev3", 00:08:05.081 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:05.081 "is_configured": true, 00:08:05.081 "data_offset": 0, 00:08:05.081 "data_size": 65536 
00:08:05.081 } 00:08:05.081 ] 00:08:05.081 }' 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.081 23:25:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.650 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.650 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.650 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.650 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.651 [2024-09-30 23:25:45.365812] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.651 
23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.651 "name": "Existed_Raid", 00:08:05.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.651 "strip_size_kb": 64, 00:08:05.651 "state": "configuring", 00:08:05.651 "raid_level": "raid0", 00:08:05.651 "superblock": false, 00:08:05.651 "num_base_bdevs": 3, 00:08:05.651 "num_base_bdevs_discovered": 1, 00:08:05.651 "num_base_bdevs_operational": 3, 00:08:05.651 "base_bdevs_list": [ 00:08:05.651 { 00:08:05.651 "name": "BaseBdev1", 00:08:05.651 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:05.651 "is_configured": true, 00:08:05.651 "data_offset": 0, 00:08:05.651 "data_size": 65536 00:08:05.651 }, 00:08:05.651 { 00:08:05.651 "name": null, 
00:08:05.651 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:05.651 "is_configured": false, 00:08:05.651 "data_offset": 0, 00:08:05.651 "data_size": 65536 00:08:05.651 }, 00:08:05.651 { 00:08:05.651 "name": null, 00:08:05.651 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:05.651 "is_configured": false, 00:08:05.651 "data_offset": 0, 00:08:05.651 "data_size": 65536 00:08:05.651 } 00:08:05.651 ] 00:08:05.651 }' 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.651 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.225 [2024-09-30 23:25:45.853021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.225 "name": "Existed_Raid", 00:08:06.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.225 "strip_size_kb": 64, 00:08:06.225 "state": "configuring", 00:08:06.225 "raid_level": "raid0", 00:08:06.225 "superblock": false, 00:08:06.225 
"num_base_bdevs": 3, 00:08:06.225 "num_base_bdevs_discovered": 2, 00:08:06.225 "num_base_bdevs_operational": 3, 00:08:06.225 "base_bdevs_list": [ 00:08:06.225 { 00:08:06.225 "name": "BaseBdev1", 00:08:06.225 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:06.225 "is_configured": true, 00:08:06.225 "data_offset": 0, 00:08:06.225 "data_size": 65536 00:08:06.225 }, 00:08:06.225 { 00:08:06.225 "name": null, 00:08:06.225 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:06.225 "is_configured": false, 00:08:06.225 "data_offset": 0, 00:08:06.225 "data_size": 65536 00:08:06.225 }, 00:08:06.225 { 00:08:06.225 "name": "BaseBdev3", 00:08:06.225 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:06.225 "is_configured": true, 00:08:06.225 "data_offset": 0, 00:08:06.225 "data_size": 65536 00:08:06.225 } 00:08:06.225 ] 00:08:06.225 }' 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.225 23:25:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.485 23:25:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.485 [2024-09-30 23:25:46.292300] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.485 23:25:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.745 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.745 "name": "Existed_Raid", 00:08:06.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.745 "strip_size_kb": 64, 00:08:06.745 "state": "configuring", 00:08:06.745 "raid_level": "raid0", 00:08:06.745 "superblock": false, 00:08:06.745 "num_base_bdevs": 3, 00:08:06.745 "num_base_bdevs_discovered": 1, 00:08:06.745 "num_base_bdevs_operational": 3, 00:08:06.745 "base_bdevs_list": [ 00:08:06.745 { 00:08:06.745 "name": null, 00:08:06.745 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:06.745 "is_configured": false, 00:08:06.745 "data_offset": 0, 00:08:06.745 "data_size": 65536 00:08:06.745 }, 00:08:06.745 { 00:08:06.745 "name": null, 00:08:06.745 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:06.745 "is_configured": false, 00:08:06.745 "data_offset": 0, 00:08:06.745 "data_size": 65536 00:08:06.745 }, 00:08:06.745 { 00:08:06.745 "name": "BaseBdev3", 00:08:06.745 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:06.745 "is_configured": true, 00:08:06.745 "data_offset": 0, 00:08:06.745 "data_size": 65536 00:08:06.745 } 00:08:06.745 ] 00:08:06.745 }' 00:08:06.745 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.745 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 [2024-09-30 23:25:46.770354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.006 "name": "Existed_Raid", 00:08:07.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.006 "strip_size_kb": 64, 00:08:07.006 "state": "configuring", 00:08:07.006 "raid_level": "raid0", 00:08:07.006 "superblock": false, 00:08:07.006 "num_base_bdevs": 3, 00:08:07.006 "num_base_bdevs_discovered": 2, 00:08:07.006 "num_base_bdevs_operational": 3, 00:08:07.006 "base_bdevs_list": [ 00:08:07.006 { 00:08:07.006 "name": null, 00:08:07.006 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:07.006 "is_configured": false, 00:08:07.006 "data_offset": 0, 00:08:07.006 "data_size": 65536 00:08:07.006 }, 00:08:07.006 { 00:08:07.006 "name": "BaseBdev2", 00:08:07.006 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:07.006 "is_configured": true, 00:08:07.006 "data_offset": 0, 00:08:07.006 "data_size": 65536 00:08:07.006 }, 00:08:07.006 { 00:08:07.006 "name": "BaseBdev3", 00:08:07.006 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:07.006 "is_configured": true, 00:08:07.006 "data_offset": 0, 00:08:07.006 "data_size": 65536 00:08:07.006 } 00:08:07.006 ] 00:08:07.006 }' 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.006 23:25:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.577 23:25:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 094ffeb9-c300-4fc1-9c45-5ab99e0aebd8 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.577 [2024-09-30 23:25:47.288559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:07.577 [2024-09-30 23:25:47.288665] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:07.577 [2024-09-30 23:25:47.288692] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:07.577 [2024-09-30 23:25:47.288990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:08:07.577 NewBaseBdev 00:08:07.577 [2024-09-30 23:25:47.289151] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:07.577 [2024-09-30 23:25:47.289165] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:07.577 [2024-09-30 23:25:47.289356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.577 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:07.578 [ 00:08:07.578 { 00:08:07.578 "name": "NewBaseBdev", 00:08:07.578 "aliases": [ 00:08:07.578 "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8" 00:08:07.578 ], 00:08:07.578 "product_name": "Malloc disk", 00:08:07.578 "block_size": 512, 00:08:07.578 "num_blocks": 65536, 00:08:07.578 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:07.578 "assigned_rate_limits": { 00:08:07.578 "rw_ios_per_sec": 0, 00:08:07.578 "rw_mbytes_per_sec": 0, 00:08:07.578 "r_mbytes_per_sec": 0, 00:08:07.578 "w_mbytes_per_sec": 0 00:08:07.578 }, 00:08:07.578 "claimed": true, 00:08:07.578 "claim_type": "exclusive_write", 00:08:07.578 "zoned": false, 00:08:07.578 "supported_io_types": { 00:08:07.578 "read": true, 00:08:07.578 "write": true, 00:08:07.578 "unmap": true, 00:08:07.578 "flush": true, 00:08:07.578 "reset": true, 00:08:07.578 "nvme_admin": false, 00:08:07.578 "nvme_io": false, 00:08:07.578 "nvme_io_md": false, 00:08:07.578 "write_zeroes": true, 00:08:07.578 "zcopy": true, 00:08:07.578 "get_zone_info": false, 00:08:07.578 "zone_management": false, 00:08:07.578 "zone_append": false, 00:08:07.578 "compare": false, 00:08:07.578 "compare_and_write": false, 00:08:07.578 "abort": true, 00:08:07.578 "seek_hole": false, 00:08:07.578 "seek_data": false, 00:08:07.578 "copy": true, 00:08:07.578 "nvme_iov_md": false 00:08:07.578 }, 00:08:07.578 "memory_domains": [ 00:08:07.578 { 00:08:07.578 "dma_device_id": "system", 00:08:07.578 "dma_device_type": 1 00:08:07.578 }, 00:08:07.578 { 00:08:07.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.578 "dma_device_type": 2 00:08:07.578 } 00:08:07.578 ], 00:08:07.578 "driver_specific": {} 00:08:07.578 } 00:08:07.578 ] 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.578 "name": "Existed_Raid", 00:08:07.578 "uuid": "dafb9ee6-e75d-4da5-a17a-23c2dc2401de", 00:08:07.578 "strip_size_kb": 64, 00:08:07.578 "state": "online", 00:08:07.578 "raid_level": "raid0", 00:08:07.578 "superblock": false, 00:08:07.578 "num_base_bdevs": 3, 00:08:07.578 
"num_base_bdevs_discovered": 3, 00:08:07.578 "num_base_bdevs_operational": 3, 00:08:07.578 "base_bdevs_list": [ 00:08:07.578 { 00:08:07.578 "name": "NewBaseBdev", 00:08:07.578 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:07.578 "is_configured": true, 00:08:07.578 "data_offset": 0, 00:08:07.578 "data_size": 65536 00:08:07.578 }, 00:08:07.578 { 00:08:07.578 "name": "BaseBdev2", 00:08:07.578 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:07.578 "is_configured": true, 00:08:07.578 "data_offset": 0, 00:08:07.578 "data_size": 65536 00:08:07.578 }, 00:08:07.578 { 00:08:07.578 "name": "BaseBdev3", 00:08:07.578 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:07.578 "is_configured": true, 00:08:07.578 "data_offset": 0, 00:08:07.578 "data_size": 65536 00:08:07.578 } 00:08:07.578 ] 00:08:07.578 }' 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.578 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.146 [2024-09-30 23:25:47.736123] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.146 "name": "Existed_Raid", 00:08:08.146 "aliases": [ 00:08:08.146 "dafb9ee6-e75d-4da5-a17a-23c2dc2401de" 00:08:08.146 ], 00:08:08.146 "product_name": "Raid Volume", 00:08:08.146 "block_size": 512, 00:08:08.146 "num_blocks": 196608, 00:08:08.146 "uuid": "dafb9ee6-e75d-4da5-a17a-23c2dc2401de", 00:08:08.146 "assigned_rate_limits": { 00:08:08.146 "rw_ios_per_sec": 0, 00:08:08.146 "rw_mbytes_per_sec": 0, 00:08:08.146 "r_mbytes_per_sec": 0, 00:08:08.146 "w_mbytes_per_sec": 0 00:08:08.146 }, 00:08:08.146 "claimed": false, 00:08:08.146 "zoned": false, 00:08:08.146 "supported_io_types": { 00:08:08.146 "read": true, 00:08:08.146 "write": true, 00:08:08.146 "unmap": true, 00:08:08.146 "flush": true, 00:08:08.146 "reset": true, 00:08:08.146 "nvme_admin": false, 00:08:08.146 "nvme_io": false, 00:08:08.146 "nvme_io_md": false, 00:08:08.146 "write_zeroes": true, 00:08:08.146 "zcopy": false, 00:08:08.146 "get_zone_info": false, 00:08:08.146 "zone_management": false, 00:08:08.146 "zone_append": false, 00:08:08.146 "compare": false, 00:08:08.146 "compare_and_write": false, 00:08:08.146 "abort": false, 00:08:08.146 "seek_hole": false, 00:08:08.146 "seek_data": false, 00:08:08.146 "copy": false, 00:08:08.146 "nvme_iov_md": false 00:08:08.146 }, 00:08:08.146 "memory_domains": [ 00:08:08.146 { 00:08:08.146 "dma_device_id": "system", 00:08:08.146 "dma_device_type": 1 00:08:08.146 }, 00:08:08.146 { 00:08:08.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.146 "dma_device_type": 2 00:08:08.146 }, 
00:08:08.146 { 00:08:08.146 "dma_device_id": "system", 00:08:08.146 "dma_device_type": 1 00:08:08.146 }, 00:08:08.146 { 00:08:08.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.146 "dma_device_type": 2 00:08:08.146 }, 00:08:08.146 { 00:08:08.146 "dma_device_id": "system", 00:08:08.146 "dma_device_type": 1 00:08:08.146 }, 00:08:08.146 { 00:08:08.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.146 "dma_device_type": 2 00:08:08.146 } 00:08:08.146 ], 00:08:08.146 "driver_specific": { 00:08:08.146 "raid": { 00:08:08.146 "uuid": "dafb9ee6-e75d-4da5-a17a-23c2dc2401de", 00:08:08.146 "strip_size_kb": 64, 00:08:08.146 "state": "online", 00:08:08.146 "raid_level": "raid0", 00:08:08.146 "superblock": false, 00:08:08.146 "num_base_bdevs": 3, 00:08:08.146 "num_base_bdevs_discovered": 3, 00:08:08.146 "num_base_bdevs_operational": 3, 00:08:08.146 "base_bdevs_list": [ 00:08:08.146 { 00:08:08.146 "name": "NewBaseBdev", 00:08:08.146 "uuid": "094ffeb9-c300-4fc1-9c45-5ab99e0aebd8", 00:08:08.146 "is_configured": true, 00:08:08.146 "data_offset": 0, 00:08:08.146 "data_size": 65536 00:08:08.146 }, 00:08:08.146 { 00:08:08.146 "name": "BaseBdev2", 00:08:08.146 "uuid": "3fd34890-5075-46ed-931f-c7ab2a7ca007", 00:08:08.146 "is_configured": true, 00:08:08.146 "data_offset": 0, 00:08:08.146 "data_size": 65536 00:08:08.146 }, 00:08:08.146 { 00:08:08.146 "name": "BaseBdev3", 00:08:08.146 "uuid": "3321eee3-29fa-4304-bc46-9e383a698c8b", 00:08:08.146 "is_configured": true, 00:08:08.146 "data_offset": 0, 00:08:08.146 "data_size": 65536 00:08:08.146 } 00:08:08.146 ] 00:08:08.146 } 00:08:08.146 } 00:08:08.146 }' 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:08.146 BaseBdev2 00:08:08.146 BaseBdev3' 00:08:08.146 23:25:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.146 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.147 23:25:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.406 [2024-09-30 23:25:48.003395] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.406 [2024-09-30 23:25:48.003466] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.406 [2024-09-30 23:25:48.003545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.406 [2024-09-30 23:25:48.003623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.406 [2024-09-30 23:25:48.003656] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75089 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75089 ']' 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75089 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75089 00:08:08.406 killing process with pid 75089 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75089' 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75089 00:08:08.406 [2024-09-30 23:25:48.055481] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.406 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75089 00:08:08.406 [2024-09-30 23:25:48.086844] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:08.665 ************************************ 00:08:08.665 END TEST raid_state_function_test 00:08:08.665 ************************************ 00:08:08.665 00:08:08.665 real 0m8.489s 
00:08:08.665 user 0m14.382s 00:08:08.665 sys 0m1.786s 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.665 23:25:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:08.665 23:25:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:08.665 23:25:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.665 23:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.665 ************************************ 00:08:08.665 START TEST raid_state_function_test_sb 00:08:08.665 ************************************ 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:08.665 Process raid pid: 75688 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75688 
00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75688' 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75688 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75688 ']' 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.665 23:25:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.665 [2024-09-30 23:25:48.512265] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:08:08.665 [2024-09-30 23:25:48.512506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.923 [2024-09-30 23:25:48.679492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.923 [2024-09-30 23:25:48.723835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.923 [2024-09-30 23:25:48.766344] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.923 [2024-09-30 23:25:48.766465] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.491 [2024-09-30 23:25:49.319928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:09.491 [2024-09-30 23:25:49.319978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:09.491 [2024-09-30 23:25:49.319992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:09.491 [2024-09-30 23:25:49.320002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:09.491 [2024-09-30 23:25:49.320008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:09.491 [2024-09-30 23:25:49.320021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.491 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.750 23:25:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.750 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.750 "name": "Existed_Raid", 00:08:09.750 "uuid": "c982d495-cb1f-4fb0-8b1f-ddbfb57d35f8", 00:08:09.750 "strip_size_kb": 64, 00:08:09.750 "state": "configuring", 00:08:09.750 "raid_level": "raid0", 00:08:09.750 "superblock": true, 00:08:09.750 "num_base_bdevs": 3, 00:08:09.750 "num_base_bdevs_discovered": 0, 00:08:09.750 "num_base_bdevs_operational": 3, 00:08:09.750 "base_bdevs_list": [ 00:08:09.750 { 00:08:09.750 "name": "BaseBdev1", 00:08:09.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.750 "is_configured": false, 00:08:09.750 "data_offset": 0, 00:08:09.750 "data_size": 0 00:08:09.750 }, 00:08:09.750 { 00:08:09.750 "name": "BaseBdev2", 00:08:09.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.750 "is_configured": false, 00:08:09.750 "data_offset": 0, 00:08:09.750 "data_size": 0 00:08:09.750 }, 00:08:09.750 { 00:08:09.750 "name": "BaseBdev3", 00:08:09.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.750 "is_configured": false, 00:08:09.750 "data_offset": 0, 00:08:09.750 "data_size": 0 00:08:09.750 } 00:08:09.750 ] 00:08:09.750 }' 00:08:09.750 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.750 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.009 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.009 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.009 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.009 [2024-09-30 23:25:49.775000] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.009 [2024-09-30 23:25:49.775121] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:10.009 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.009 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.009 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.009 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.009 [2024-09-30 23:25:49.787016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.009 [2024-09-30 23:25:49.787117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.009 [2024-09-30 23:25:49.787144] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.010 [2024-09-30 23:25:49.787166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.010 [2024-09-30 23:25:49.787184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.010 [2024-09-30 23:25:49.787204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 [2024-09-30 23:25:49.807731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.010 BaseBdev1 
00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.010 [ 00:08:10.010 { 00:08:10.010 "name": "BaseBdev1", 00:08:10.010 "aliases": [ 00:08:10.010 "da23bae4-853f-4a02-82d3-7f36871a38c7" 00:08:10.010 ], 00:08:10.010 "product_name": "Malloc disk", 00:08:10.010 "block_size": 512, 00:08:10.010 "num_blocks": 65536, 00:08:10.010 "uuid": "da23bae4-853f-4a02-82d3-7f36871a38c7", 00:08:10.010 "assigned_rate_limits": { 00:08:10.010 
"rw_ios_per_sec": 0, 00:08:10.010 "rw_mbytes_per_sec": 0, 00:08:10.010 "r_mbytes_per_sec": 0, 00:08:10.010 "w_mbytes_per_sec": 0 00:08:10.010 }, 00:08:10.010 "claimed": true, 00:08:10.010 "claim_type": "exclusive_write", 00:08:10.010 "zoned": false, 00:08:10.010 "supported_io_types": { 00:08:10.010 "read": true, 00:08:10.010 "write": true, 00:08:10.010 "unmap": true, 00:08:10.010 "flush": true, 00:08:10.010 "reset": true, 00:08:10.010 "nvme_admin": false, 00:08:10.010 "nvme_io": false, 00:08:10.010 "nvme_io_md": false, 00:08:10.010 "write_zeroes": true, 00:08:10.010 "zcopy": true, 00:08:10.010 "get_zone_info": false, 00:08:10.010 "zone_management": false, 00:08:10.010 "zone_append": false, 00:08:10.010 "compare": false, 00:08:10.010 "compare_and_write": false, 00:08:10.010 "abort": true, 00:08:10.010 "seek_hole": false, 00:08:10.010 "seek_data": false, 00:08:10.010 "copy": true, 00:08:10.010 "nvme_iov_md": false 00:08:10.010 }, 00:08:10.010 "memory_domains": [ 00:08:10.010 { 00:08:10.010 "dma_device_id": "system", 00:08:10.010 "dma_device_type": 1 00:08:10.010 }, 00:08:10.010 { 00:08:10.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.010 "dma_device_type": 2 00:08:10.010 } 00:08:10.010 ], 00:08:10.010 "driver_specific": {} 00:08:10.010 } 00:08:10.010 ] 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.010 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.269 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.269 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.269 "name": "Existed_Raid", 00:08:10.269 "uuid": "475c86af-8c67-4e17-a42e-dc1e06dd2a2f", 00:08:10.269 "strip_size_kb": 64, 00:08:10.269 "state": "configuring", 00:08:10.269 "raid_level": "raid0", 00:08:10.269 "superblock": true, 00:08:10.270 "num_base_bdevs": 3, 00:08:10.270 "num_base_bdevs_discovered": 1, 00:08:10.270 "num_base_bdevs_operational": 3, 00:08:10.270 "base_bdevs_list": [ 00:08:10.270 { 00:08:10.270 "name": "BaseBdev1", 00:08:10.270 "uuid": "da23bae4-853f-4a02-82d3-7f36871a38c7", 00:08:10.270 "is_configured": true, 00:08:10.270 "data_offset": 2048, 00:08:10.270 "data_size": 63488 
00:08:10.270 }, 00:08:10.270 { 00:08:10.270 "name": "BaseBdev2", 00:08:10.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.270 "is_configured": false, 00:08:10.270 "data_offset": 0, 00:08:10.270 "data_size": 0 00:08:10.270 }, 00:08:10.270 { 00:08:10.270 "name": "BaseBdev3", 00:08:10.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.270 "is_configured": false, 00:08:10.270 "data_offset": 0, 00:08:10.270 "data_size": 0 00:08:10.270 } 00:08:10.270 ] 00:08:10.270 }' 00:08:10.270 23:25:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.270 23:25:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 [2024-09-30 23:25:50.294916] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:10.530 [2024-09-30 23:25:50.294960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 [2024-09-30 23:25:50.306945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.530 [2024-09-30 
23:25:50.308784] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.530 [2024-09-30 23:25:50.308830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.530 [2024-09-30 23:25:50.308840] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:10.530 [2024-09-30 23:25:50.308850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.530 "name": "Existed_Raid", 00:08:10.530 "uuid": "49f54914-b153-4077-a31c-0336437f04c2", 00:08:10.530 "strip_size_kb": 64, 00:08:10.530 "state": "configuring", 00:08:10.530 "raid_level": "raid0", 00:08:10.530 "superblock": true, 00:08:10.530 "num_base_bdevs": 3, 00:08:10.530 "num_base_bdevs_discovered": 1, 00:08:10.530 "num_base_bdevs_operational": 3, 00:08:10.530 "base_bdevs_list": [ 00:08:10.530 { 00:08:10.530 "name": "BaseBdev1", 00:08:10.530 "uuid": "da23bae4-853f-4a02-82d3-7f36871a38c7", 00:08:10.530 "is_configured": true, 00:08:10.530 "data_offset": 2048, 00:08:10.530 "data_size": 63488 00:08:10.530 }, 00:08:10.530 { 00:08:10.530 "name": "BaseBdev2", 00:08:10.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.530 "is_configured": false, 00:08:10.530 "data_offset": 0, 00:08:10.530 "data_size": 0 00:08:10.530 }, 00:08:10.530 { 00:08:10.530 "name": "BaseBdev3", 00:08:10.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.530 "is_configured": false, 00:08:10.530 "data_offset": 0, 00:08:10.530 "data_size": 0 00:08:10.530 } 00:08:10.530 ] 00:08:10.530 }' 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.530 23:25:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.100 [2024-09-30 23:25:50.711403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.100 BaseBdev2 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.100 [ 00:08:11.100 { 00:08:11.100 "name": "BaseBdev2", 00:08:11.100 "aliases": [ 00:08:11.100 "7e5fc226-529c-4d24-a23f-5d20295b9312" 00:08:11.100 ], 00:08:11.100 "product_name": "Malloc disk", 00:08:11.100 "block_size": 512, 00:08:11.100 "num_blocks": 65536, 00:08:11.100 "uuid": "7e5fc226-529c-4d24-a23f-5d20295b9312", 00:08:11.100 "assigned_rate_limits": { 00:08:11.100 "rw_ios_per_sec": 0, 00:08:11.100 "rw_mbytes_per_sec": 0, 00:08:11.100 "r_mbytes_per_sec": 0, 00:08:11.100 "w_mbytes_per_sec": 0 00:08:11.100 }, 00:08:11.100 "claimed": true, 00:08:11.100 "claim_type": "exclusive_write", 00:08:11.100 "zoned": false, 00:08:11.100 "supported_io_types": { 00:08:11.100 "read": true, 00:08:11.100 "write": true, 00:08:11.100 "unmap": true, 00:08:11.100 "flush": true, 00:08:11.100 "reset": true, 00:08:11.100 "nvme_admin": false, 00:08:11.100 "nvme_io": false, 00:08:11.100 "nvme_io_md": false, 00:08:11.100 "write_zeroes": true, 00:08:11.100 "zcopy": true, 00:08:11.100 "get_zone_info": false, 00:08:11.100 "zone_management": false, 00:08:11.100 "zone_append": false, 00:08:11.100 "compare": false, 00:08:11.100 "compare_and_write": false, 00:08:11.100 "abort": true, 00:08:11.100 "seek_hole": false, 00:08:11.100 "seek_data": false, 00:08:11.100 "copy": true, 00:08:11.100 "nvme_iov_md": false 00:08:11.100 }, 00:08:11.100 "memory_domains": [ 00:08:11.100 { 00:08:11.100 "dma_device_id": "system", 00:08:11.100 "dma_device_type": 1 00:08:11.100 }, 00:08:11.100 { 00:08:11.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.100 "dma_device_type": 2 00:08:11.100 } 00:08:11.100 ], 00:08:11.100 "driver_specific": {} 00:08:11.100 } 00:08:11.100 ] 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.100 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.101 "name": "Existed_Raid", 00:08:11.101 "uuid": "49f54914-b153-4077-a31c-0336437f04c2", 00:08:11.101 "strip_size_kb": 64, 00:08:11.101 "state": "configuring", 00:08:11.101 "raid_level": "raid0", 00:08:11.101 "superblock": true, 00:08:11.101 "num_base_bdevs": 3, 00:08:11.101 "num_base_bdevs_discovered": 2, 00:08:11.101 "num_base_bdevs_operational": 3, 00:08:11.101 "base_bdevs_list": [ 00:08:11.101 { 00:08:11.101 "name": "BaseBdev1", 00:08:11.101 "uuid": "da23bae4-853f-4a02-82d3-7f36871a38c7", 00:08:11.101 "is_configured": true, 00:08:11.101 "data_offset": 2048, 00:08:11.101 "data_size": 63488 00:08:11.101 }, 00:08:11.101 { 00:08:11.101 "name": "BaseBdev2", 00:08:11.101 "uuid": "7e5fc226-529c-4d24-a23f-5d20295b9312", 00:08:11.101 "is_configured": true, 00:08:11.101 "data_offset": 2048, 00:08:11.101 "data_size": 63488 00:08:11.101 }, 00:08:11.101 { 00:08:11.101 "name": "BaseBdev3", 00:08:11.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.101 "is_configured": false, 00:08:11.101 "data_offset": 0, 00:08:11.101 "data_size": 0 00:08:11.101 } 00:08:11.101 ] 00:08:11.101 }' 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.101 23:25:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.361 [2024-09-30 23:25:51.165709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:11.361 [2024-09-30 23:25:51.165917] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:11.361 [2024-09-30 23:25:51.165943] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:11.361 BaseBdev3 00:08:11.361 [2024-09-30 23:25:51.166236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:11.361 [2024-09-30 23:25:51.166375] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:11.361 [2024-09-30 23:25:51.166391] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:11.361 [2024-09-30 23:25:51.166515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.361 [ 00:08:11.361 { 00:08:11.361 "name": "BaseBdev3", 00:08:11.361 "aliases": [ 00:08:11.361 "1bd93639-1ca8-4ae3-b67e-cf4ca54f6750" 00:08:11.361 ], 00:08:11.361 "product_name": "Malloc disk", 00:08:11.361 "block_size": 512, 00:08:11.361 "num_blocks": 65536, 00:08:11.361 "uuid": "1bd93639-1ca8-4ae3-b67e-cf4ca54f6750", 00:08:11.361 "assigned_rate_limits": { 00:08:11.361 "rw_ios_per_sec": 0, 00:08:11.361 "rw_mbytes_per_sec": 0, 00:08:11.361 "r_mbytes_per_sec": 0, 00:08:11.361 "w_mbytes_per_sec": 0 00:08:11.361 }, 00:08:11.361 "claimed": true, 00:08:11.361 "claim_type": "exclusive_write", 00:08:11.361 "zoned": false, 00:08:11.361 "supported_io_types": { 00:08:11.361 "read": true, 00:08:11.361 "write": true, 00:08:11.361 "unmap": true, 00:08:11.361 "flush": true, 00:08:11.361 "reset": true, 00:08:11.361 "nvme_admin": false, 00:08:11.361 "nvme_io": false, 00:08:11.361 "nvme_io_md": false, 00:08:11.361 "write_zeroes": true, 00:08:11.361 "zcopy": true, 00:08:11.361 "get_zone_info": false, 00:08:11.361 "zone_management": false, 00:08:11.361 "zone_append": false, 00:08:11.361 "compare": false, 00:08:11.361 "compare_and_write": false, 00:08:11.361 "abort": true, 00:08:11.361 "seek_hole": false, 00:08:11.361 "seek_data": false, 00:08:11.361 "copy": true, 00:08:11.361 "nvme_iov_md": false 00:08:11.361 }, 00:08:11.361 "memory_domains": [ 00:08:11.361 { 00:08:11.361 "dma_device_id": "system", 00:08:11.361 "dma_device_type": 1 00:08:11.361 }, 00:08:11.361 { 00:08:11.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.361 "dma_device_type": 2 00:08:11.361 } 00:08:11.361 ], 00:08:11.361 "driver_specific": 
{} 00:08:11.361 } 00:08:11.361 ] 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:11.361 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.621 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.621 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.621 "name": "Existed_Raid", 00:08:11.621 "uuid": "49f54914-b153-4077-a31c-0336437f04c2", 00:08:11.621 "strip_size_kb": 64, 00:08:11.621 "state": "online", 00:08:11.621 "raid_level": "raid0", 00:08:11.621 "superblock": true, 00:08:11.621 "num_base_bdevs": 3, 00:08:11.621 "num_base_bdevs_discovered": 3, 00:08:11.621 "num_base_bdevs_operational": 3, 00:08:11.621 "base_bdevs_list": [ 00:08:11.621 { 00:08:11.621 "name": "BaseBdev1", 00:08:11.621 "uuid": "da23bae4-853f-4a02-82d3-7f36871a38c7", 00:08:11.621 "is_configured": true, 00:08:11.621 "data_offset": 2048, 00:08:11.621 "data_size": 63488 00:08:11.621 }, 00:08:11.621 { 00:08:11.621 "name": "BaseBdev2", 00:08:11.621 "uuid": "7e5fc226-529c-4d24-a23f-5d20295b9312", 00:08:11.621 "is_configured": true, 00:08:11.621 "data_offset": 2048, 00:08:11.621 "data_size": 63488 00:08:11.621 }, 00:08:11.621 { 00:08:11.621 "name": "BaseBdev3", 00:08:11.621 "uuid": "1bd93639-1ca8-4ae3-b67e-cf4ca54f6750", 00:08:11.621 "is_configured": true, 00:08:11.621 "data_offset": 2048, 00:08:11.621 "data_size": 63488 00:08:11.621 } 00:08:11.621 ] 00:08:11.621 }' 00:08:11.621 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.621 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.881 [2024-09-30 23:25:51.609292] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.881 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.881 "name": "Existed_Raid", 00:08:11.881 "aliases": [ 00:08:11.881 "49f54914-b153-4077-a31c-0336437f04c2" 00:08:11.881 ], 00:08:11.881 "product_name": "Raid Volume", 00:08:11.881 "block_size": 512, 00:08:11.881 "num_blocks": 190464, 00:08:11.881 "uuid": "49f54914-b153-4077-a31c-0336437f04c2", 00:08:11.881 "assigned_rate_limits": { 00:08:11.881 "rw_ios_per_sec": 0, 00:08:11.881 "rw_mbytes_per_sec": 0, 00:08:11.881 "r_mbytes_per_sec": 0, 00:08:11.881 "w_mbytes_per_sec": 0 00:08:11.881 }, 00:08:11.881 "claimed": false, 00:08:11.881 "zoned": false, 00:08:11.881 "supported_io_types": { 00:08:11.881 "read": true, 00:08:11.881 "write": true, 00:08:11.881 "unmap": true, 00:08:11.881 "flush": true, 00:08:11.881 "reset": true, 00:08:11.881 "nvme_admin": false, 00:08:11.881 "nvme_io": false, 00:08:11.881 "nvme_io_md": false, 00:08:11.881 
"write_zeroes": true, 00:08:11.881 "zcopy": false, 00:08:11.881 "get_zone_info": false, 00:08:11.881 "zone_management": false, 00:08:11.881 "zone_append": false, 00:08:11.881 "compare": false, 00:08:11.882 "compare_and_write": false, 00:08:11.882 "abort": false, 00:08:11.882 "seek_hole": false, 00:08:11.882 "seek_data": false, 00:08:11.882 "copy": false, 00:08:11.882 "nvme_iov_md": false 00:08:11.882 }, 00:08:11.882 "memory_domains": [ 00:08:11.882 { 00:08:11.882 "dma_device_id": "system", 00:08:11.882 "dma_device_type": 1 00:08:11.882 }, 00:08:11.882 { 00:08:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.882 "dma_device_type": 2 00:08:11.882 }, 00:08:11.882 { 00:08:11.882 "dma_device_id": "system", 00:08:11.882 "dma_device_type": 1 00:08:11.882 }, 00:08:11.882 { 00:08:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.882 "dma_device_type": 2 00:08:11.882 }, 00:08:11.882 { 00:08:11.882 "dma_device_id": "system", 00:08:11.882 "dma_device_type": 1 00:08:11.882 }, 00:08:11.882 { 00:08:11.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.882 "dma_device_type": 2 00:08:11.882 } 00:08:11.882 ], 00:08:11.882 "driver_specific": { 00:08:11.882 "raid": { 00:08:11.882 "uuid": "49f54914-b153-4077-a31c-0336437f04c2", 00:08:11.882 "strip_size_kb": 64, 00:08:11.882 "state": "online", 00:08:11.882 "raid_level": "raid0", 00:08:11.882 "superblock": true, 00:08:11.882 "num_base_bdevs": 3, 00:08:11.882 "num_base_bdevs_discovered": 3, 00:08:11.882 "num_base_bdevs_operational": 3, 00:08:11.882 "base_bdevs_list": [ 00:08:11.882 { 00:08:11.882 "name": "BaseBdev1", 00:08:11.882 "uuid": "da23bae4-853f-4a02-82d3-7f36871a38c7", 00:08:11.882 "is_configured": true, 00:08:11.882 "data_offset": 2048, 00:08:11.882 "data_size": 63488 00:08:11.882 }, 00:08:11.882 { 00:08:11.882 "name": "BaseBdev2", 00:08:11.882 "uuid": "7e5fc226-529c-4d24-a23f-5d20295b9312", 00:08:11.882 "is_configured": true, 00:08:11.882 "data_offset": 2048, 00:08:11.882 "data_size": 63488 00:08:11.882 }, 
00:08:11.882 { 00:08:11.882 "name": "BaseBdev3", 00:08:11.882 "uuid": "1bd93639-1ca8-4ae3-b67e-cf4ca54f6750", 00:08:11.882 "is_configured": true, 00:08:11.882 "data_offset": 2048, 00:08:11.882 "data_size": 63488 00:08:11.882 } 00:08:11.882 ] 00:08:11.882 } 00:08:11.882 } 00:08:11.882 }' 00:08:11.882 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.882 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.882 BaseBdev2 00:08:11.882 BaseBdev3' 00:08:11.882 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.143 
23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 [2024-09-30 23:25:51.908543] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.143 [2024-09-30 23:25:51.908621] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.143 [2024-09-30 23:25:51.908705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.143 "name": "Existed_Raid", 00:08:12.143 "uuid": "49f54914-b153-4077-a31c-0336437f04c2", 00:08:12.143 "strip_size_kb": 64, 00:08:12.143 "state": "offline", 00:08:12.143 "raid_level": "raid0", 00:08:12.143 "superblock": true, 00:08:12.143 "num_base_bdevs": 3, 00:08:12.143 "num_base_bdevs_discovered": 2, 00:08:12.143 "num_base_bdevs_operational": 2, 00:08:12.143 "base_bdevs_list": [ 00:08:12.143 { 00:08:12.143 "name": null, 00:08:12.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.143 "is_configured": false, 00:08:12.143 "data_offset": 0, 00:08:12.143 "data_size": 63488 00:08:12.143 }, 00:08:12.143 { 00:08:12.143 "name": "BaseBdev2", 00:08:12.143 "uuid": "7e5fc226-529c-4d24-a23f-5d20295b9312", 00:08:12.143 "is_configured": true, 00:08:12.143 "data_offset": 2048, 00:08:12.143 "data_size": 63488 00:08:12.143 }, 00:08:12.143 { 00:08:12.143 "name": "BaseBdev3", 00:08:12.143 "uuid": "1bd93639-1ca8-4ae3-b67e-cf4ca54f6750", 
00:08:12.143 "is_configured": true, 00:08:12.143 "data_offset": 2048, 00:08:12.143 "data_size": 63488 00:08:12.143 } 00:08:12.143 ] 00:08:12.143 }' 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.143 23:25:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.714 [2024-09-30 23:25:52.415297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.714 [2024-09-30 23:25:52.482473] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:12.714 [2024-09-30 23:25:52.482524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.714 BaseBdev2 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:12.714 23:25:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.714 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 [ 00:08:12.975 { 00:08:12.975 "name": "BaseBdev2", 00:08:12.975 "aliases": [ 00:08:12.975 "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75" 00:08:12.975 ], 00:08:12.975 "product_name": "Malloc disk", 00:08:12.975 "block_size": 512, 00:08:12.975 "num_blocks": 65536, 00:08:12.975 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:12.975 "assigned_rate_limits": { 00:08:12.975 "rw_ios_per_sec": 0, 00:08:12.975 "rw_mbytes_per_sec": 0, 00:08:12.975 "r_mbytes_per_sec": 0, 00:08:12.975 "w_mbytes_per_sec": 0 00:08:12.975 }, 00:08:12.975 "claimed": false, 00:08:12.975 "zoned": false, 00:08:12.975 "supported_io_types": { 00:08:12.975 "read": true, 00:08:12.975 "write": true, 00:08:12.975 "unmap": true, 00:08:12.975 "flush": true, 00:08:12.975 "reset": true, 00:08:12.975 "nvme_admin": false, 00:08:12.975 "nvme_io": false, 00:08:12.975 "nvme_io_md": false, 00:08:12.975 "write_zeroes": true, 00:08:12.975 "zcopy": true, 00:08:12.975 "get_zone_info": false, 00:08:12.975 
"zone_management": false, 00:08:12.975 "zone_append": false, 00:08:12.975 "compare": false, 00:08:12.975 "compare_and_write": false, 00:08:12.975 "abort": true, 00:08:12.975 "seek_hole": false, 00:08:12.975 "seek_data": false, 00:08:12.975 "copy": true, 00:08:12.975 "nvme_iov_md": false 00:08:12.975 }, 00:08:12.975 "memory_domains": [ 00:08:12.975 { 00:08:12.975 "dma_device_id": "system", 00:08:12.975 "dma_device_type": 1 00:08:12.975 }, 00:08:12.975 { 00:08:12.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.975 "dma_device_type": 2 00:08:12.975 } 00:08:12.975 ], 00:08:12.975 "driver_specific": {} 00:08:12.975 } 00:08:12.975 ] 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 BaseBdev3 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.975 [ 00:08:12.975 { 00:08:12.975 "name": "BaseBdev3", 00:08:12.975 "aliases": [ 00:08:12.975 "d50982be-cc42-4430-8201-78fd0859dfa2" 00:08:12.975 ], 00:08:12.975 "product_name": "Malloc disk", 00:08:12.975 "block_size": 512, 00:08:12.975 "num_blocks": 65536, 00:08:12.975 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:12.975 "assigned_rate_limits": { 00:08:12.975 "rw_ios_per_sec": 0, 00:08:12.975 "rw_mbytes_per_sec": 0, 00:08:12.975 "r_mbytes_per_sec": 0, 00:08:12.975 "w_mbytes_per_sec": 0 00:08:12.975 }, 00:08:12.975 "claimed": false, 00:08:12.975 "zoned": false, 00:08:12.975 "supported_io_types": { 00:08:12.975 "read": true, 00:08:12.975 "write": true, 00:08:12.975 "unmap": true, 00:08:12.975 "flush": true, 00:08:12.975 "reset": true, 00:08:12.975 "nvme_admin": false, 00:08:12.975 "nvme_io": false, 00:08:12.975 "nvme_io_md": false, 00:08:12.975 "write_zeroes": true, 00:08:12.975 
"zcopy": true, 00:08:12.975 "get_zone_info": false, 00:08:12.975 "zone_management": false, 00:08:12.975 "zone_append": false, 00:08:12.975 "compare": false, 00:08:12.975 "compare_and_write": false, 00:08:12.975 "abort": true, 00:08:12.975 "seek_hole": false, 00:08:12.975 "seek_data": false, 00:08:12.975 "copy": true, 00:08:12.975 "nvme_iov_md": false 00:08:12.975 }, 00:08:12.975 "memory_domains": [ 00:08:12.975 { 00:08:12.975 "dma_device_id": "system", 00:08:12.975 "dma_device_type": 1 00:08:12.975 }, 00:08:12.975 { 00:08:12.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.975 "dma_device_type": 2 00:08:12.975 } 00:08:12.975 ], 00:08:12.975 "driver_specific": {} 00:08:12.975 } 00:08:12.975 ] 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.975 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.976 [2024-09-30 23:25:52.653537] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:12.976 [2024-09-30 23:25:52.653667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:12.976 [2024-09-30 23:25:52.653709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.976 [2024-09-30 23:25:52.655560] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.976 23:25:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.976 "name": "Existed_Raid", 00:08:12.976 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:12.976 "strip_size_kb": 64, 00:08:12.976 "state": "configuring", 00:08:12.976 "raid_level": "raid0", 00:08:12.976 "superblock": true, 00:08:12.976 "num_base_bdevs": 3, 00:08:12.976 "num_base_bdevs_discovered": 2, 00:08:12.976 "num_base_bdevs_operational": 3, 00:08:12.976 "base_bdevs_list": [ 00:08:12.976 { 00:08:12.976 "name": "BaseBdev1", 00:08:12.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.976 "is_configured": false, 00:08:12.976 "data_offset": 0, 00:08:12.976 "data_size": 0 00:08:12.976 }, 00:08:12.976 { 00:08:12.976 "name": "BaseBdev2", 00:08:12.976 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:12.976 "is_configured": true, 00:08:12.976 "data_offset": 2048, 00:08:12.976 "data_size": 63488 00:08:12.976 }, 00:08:12.976 { 00:08:12.976 "name": "BaseBdev3", 00:08:12.976 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:12.976 "is_configured": true, 00:08:12.976 "data_offset": 2048, 00:08:12.976 "data_size": 63488 00:08:12.976 } 00:08:12.976 ] 00:08:12.976 }' 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.976 23:25:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.236 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:13.236 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.236 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.495 [2024-09-30 23:25:53.088794] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.495 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.495 23:25:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.495 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.496 "name": "Existed_Raid", 00:08:13.496 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:13.496 "strip_size_kb": 64, 
00:08:13.496 "state": "configuring", 00:08:13.496 "raid_level": "raid0", 00:08:13.496 "superblock": true, 00:08:13.496 "num_base_bdevs": 3, 00:08:13.496 "num_base_bdevs_discovered": 1, 00:08:13.496 "num_base_bdevs_operational": 3, 00:08:13.496 "base_bdevs_list": [ 00:08:13.496 { 00:08:13.496 "name": "BaseBdev1", 00:08:13.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.496 "is_configured": false, 00:08:13.496 "data_offset": 0, 00:08:13.496 "data_size": 0 00:08:13.496 }, 00:08:13.496 { 00:08:13.496 "name": null, 00:08:13.496 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:13.496 "is_configured": false, 00:08:13.496 "data_offset": 0, 00:08:13.496 "data_size": 63488 00:08:13.496 }, 00:08:13.496 { 00:08:13.496 "name": "BaseBdev3", 00:08:13.496 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:13.496 "is_configured": true, 00:08:13.496 "data_offset": 2048, 00:08:13.496 "data_size": 63488 00:08:13.496 } 00:08:13.496 ] 00:08:13.496 }' 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.496 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.757 [2024-09-30 23:25:53.554917] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.757 BaseBdev1 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.757 
[ 00:08:13.757 { 00:08:13.757 "name": "BaseBdev1", 00:08:13.757 "aliases": [ 00:08:13.757 "4255782a-8523-4c46-8c0b-7c00bb3422de" 00:08:13.757 ], 00:08:13.757 "product_name": "Malloc disk", 00:08:13.757 "block_size": 512, 00:08:13.757 "num_blocks": 65536, 00:08:13.757 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:13.757 "assigned_rate_limits": { 00:08:13.757 "rw_ios_per_sec": 0, 00:08:13.757 "rw_mbytes_per_sec": 0, 00:08:13.757 "r_mbytes_per_sec": 0, 00:08:13.757 "w_mbytes_per_sec": 0 00:08:13.757 }, 00:08:13.757 "claimed": true, 00:08:13.757 "claim_type": "exclusive_write", 00:08:13.757 "zoned": false, 00:08:13.757 "supported_io_types": { 00:08:13.757 "read": true, 00:08:13.757 "write": true, 00:08:13.757 "unmap": true, 00:08:13.757 "flush": true, 00:08:13.757 "reset": true, 00:08:13.757 "nvme_admin": false, 00:08:13.757 "nvme_io": false, 00:08:13.757 "nvme_io_md": false, 00:08:13.757 "write_zeroes": true, 00:08:13.757 "zcopy": true, 00:08:13.757 "get_zone_info": false, 00:08:13.757 "zone_management": false, 00:08:13.757 "zone_append": false, 00:08:13.757 "compare": false, 00:08:13.757 "compare_and_write": false, 00:08:13.757 "abort": true, 00:08:13.757 "seek_hole": false, 00:08:13.757 "seek_data": false, 00:08:13.757 "copy": true, 00:08:13.757 "nvme_iov_md": false 00:08:13.757 }, 00:08:13.757 "memory_domains": [ 00:08:13.757 { 00:08:13.757 "dma_device_id": "system", 00:08:13.757 "dma_device_type": 1 00:08:13.757 }, 00:08:13.757 { 00:08:13.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.757 "dma_device_type": 2 00:08:13.757 } 00:08:13.757 ], 00:08:13.757 "driver_specific": {} 00:08:13.757 } 00:08:13.757 ] 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.757 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.018 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.018 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.018 "name": "Existed_Raid", 00:08:14.018 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:14.018 "strip_size_kb": 64, 00:08:14.018 "state": "configuring", 00:08:14.018 "raid_level": "raid0", 00:08:14.018 "superblock": true, 
00:08:14.018 "num_base_bdevs": 3, 00:08:14.018 "num_base_bdevs_discovered": 2, 00:08:14.018 "num_base_bdevs_operational": 3, 00:08:14.018 "base_bdevs_list": [ 00:08:14.018 { 00:08:14.018 "name": "BaseBdev1", 00:08:14.018 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:14.018 "is_configured": true, 00:08:14.018 "data_offset": 2048, 00:08:14.018 "data_size": 63488 00:08:14.018 }, 00:08:14.018 { 00:08:14.018 "name": null, 00:08:14.018 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:14.018 "is_configured": false, 00:08:14.018 "data_offset": 0, 00:08:14.018 "data_size": 63488 00:08:14.018 }, 00:08:14.018 { 00:08:14.018 "name": "BaseBdev3", 00:08:14.018 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:14.018 "is_configured": true, 00:08:14.018 "data_offset": 2048, 00:08:14.018 "data_size": 63488 00:08:14.018 } 00:08:14.018 ] 00:08:14.018 }' 00:08:14.018 23:25:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.018 23:25:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.279 [2024-09-30 23:25:54.102057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:14.279 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.539 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.539 "name": "Existed_Raid", 00:08:14.539 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:14.539 "strip_size_kb": 64, 00:08:14.539 "state": "configuring", 00:08:14.539 "raid_level": "raid0", 00:08:14.539 "superblock": true, 00:08:14.539 "num_base_bdevs": 3, 00:08:14.539 "num_base_bdevs_discovered": 1, 00:08:14.539 "num_base_bdevs_operational": 3, 00:08:14.539 "base_bdevs_list": [ 00:08:14.539 { 00:08:14.539 "name": "BaseBdev1", 00:08:14.539 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:14.539 "is_configured": true, 00:08:14.539 "data_offset": 2048, 00:08:14.539 "data_size": 63488 00:08:14.539 }, 00:08:14.539 { 00:08:14.539 "name": null, 00:08:14.539 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:14.539 "is_configured": false, 00:08:14.539 "data_offset": 0, 00:08:14.539 "data_size": 63488 00:08:14.539 }, 00:08:14.539 { 00:08:14.539 "name": null, 00:08:14.539 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:14.539 "is_configured": false, 00:08:14.539 "data_offset": 0, 00:08:14.539 "data_size": 63488 00:08:14.539 } 00:08:14.539 ] 00:08:14.539 }' 00:08:14.539 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.539 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.799 [2024-09-30 23:25:54.577275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.799 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.800 "name": "Existed_Raid", 00:08:14.800 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:14.800 "strip_size_kb": 64, 00:08:14.800 "state": "configuring", 00:08:14.800 "raid_level": "raid0", 00:08:14.800 "superblock": true, 00:08:14.800 "num_base_bdevs": 3, 00:08:14.800 "num_base_bdevs_discovered": 2, 00:08:14.800 "num_base_bdevs_operational": 3, 00:08:14.800 "base_bdevs_list": [ 00:08:14.800 { 00:08:14.800 "name": "BaseBdev1", 00:08:14.800 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:14.800 "is_configured": true, 00:08:14.800 "data_offset": 2048, 00:08:14.800 "data_size": 63488 00:08:14.800 }, 00:08:14.800 { 00:08:14.800 "name": null, 00:08:14.800 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:14.800 "is_configured": false, 00:08:14.800 "data_offset": 0, 00:08:14.800 "data_size": 63488 00:08:14.800 }, 00:08:14.800 { 00:08:14.800 "name": "BaseBdev3", 00:08:14.800 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:14.800 "is_configured": true, 00:08:14.800 "data_offset": 2048, 00:08:14.800 "data_size": 63488 00:08:14.800 } 00:08:14.800 ] 00:08:14.800 }' 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.800 23:25:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.370 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:15.370 23:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.370 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.370 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.370 23:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.370 [2024-09-30 23:25:55.020542] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.370 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.370 "name": "Existed_Raid", 00:08:15.370 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:15.370 "strip_size_kb": 64, 00:08:15.370 "state": "configuring", 00:08:15.370 "raid_level": "raid0", 00:08:15.370 "superblock": true, 00:08:15.370 "num_base_bdevs": 3, 00:08:15.370 "num_base_bdevs_discovered": 1, 00:08:15.370 "num_base_bdevs_operational": 3, 00:08:15.370 "base_bdevs_list": [ 00:08:15.370 { 00:08:15.370 "name": null, 00:08:15.370 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:15.370 "is_configured": false, 00:08:15.370 "data_offset": 0, 00:08:15.371 "data_size": 63488 00:08:15.371 }, 00:08:15.371 { 00:08:15.371 "name": null, 00:08:15.371 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:15.371 "is_configured": false, 00:08:15.371 "data_offset": 0, 00:08:15.371 
"data_size": 63488 00:08:15.371 }, 00:08:15.371 { 00:08:15.371 "name": "BaseBdev3", 00:08:15.371 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:15.371 "is_configured": true, 00:08:15.371 "data_offset": 2048, 00:08:15.371 "data_size": 63488 00:08:15.371 } 00:08:15.371 ] 00:08:15.371 }' 00:08:15.371 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.371 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.630 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.630 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.630 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.630 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.630 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.890 [2024-09-30 23:25:55.502328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.890 23:25:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.890 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.891 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.891 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.891 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.891 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.891 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.891 "name": "Existed_Raid", 00:08:15.891 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:15.891 "strip_size_kb": 64, 00:08:15.891 "state": "configuring", 00:08:15.891 "raid_level": "raid0", 00:08:15.891 "superblock": true, 00:08:15.891 "num_base_bdevs": 3, 00:08:15.891 
"num_base_bdevs_discovered": 2, 00:08:15.891 "num_base_bdevs_operational": 3, 00:08:15.891 "base_bdevs_list": [ 00:08:15.891 { 00:08:15.891 "name": null, 00:08:15.891 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:15.891 "is_configured": false, 00:08:15.891 "data_offset": 0, 00:08:15.891 "data_size": 63488 00:08:15.891 }, 00:08:15.891 { 00:08:15.891 "name": "BaseBdev2", 00:08:15.891 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:15.891 "is_configured": true, 00:08:15.891 "data_offset": 2048, 00:08:15.891 "data_size": 63488 00:08:15.891 }, 00:08:15.891 { 00:08:15.891 "name": "BaseBdev3", 00:08:15.891 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:15.891 "is_configured": true, 00:08:15.891 "data_offset": 2048, 00:08:15.891 "data_size": 63488 00:08:15.891 } 00:08:15.891 ] 00:08:15.891 }' 00:08:15.891 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.891 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.150 23:25:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.150 23:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4255782a-8523-4c46-8c0b-7c00bb3422de 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.410 NewBaseBdev 00:08:16.410 [2024-09-30 23:25:56.048364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:16.410 [2024-09-30 23:25:56.048529] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:16.410 [2024-09-30 23:25:56.048545] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.410 [2024-09-30 23:25:56.048784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:16.410 [2024-09-30 23:25:56.048915] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:16.410 [2024-09-30 23:25:56.048925] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:16.410 [2024-09-30 23:25:56.049026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 
00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.410 [ 00:08:16.410 { 00:08:16.410 "name": "NewBaseBdev", 00:08:16.410 "aliases": [ 00:08:16.410 "4255782a-8523-4c46-8c0b-7c00bb3422de" 00:08:16.410 ], 00:08:16.410 "product_name": "Malloc disk", 00:08:16.410 "block_size": 512, 00:08:16.410 "num_blocks": 65536, 00:08:16.410 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:16.410 "assigned_rate_limits": { 00:08:16.410 "rw_ios_per_sec": 0, 00:08:16.410 "rw_mbytes_per_sec": 0, 00:08:16.410 "r_mbytes_per_sec": 0, 00:08:16.410 "w_mbytes_per_sec": 0 00:08:16.410 }, 00:08:16.410 "claimed": true, 00:08:16.410 "claim_type": "exclusive_write", 00:08:16.410 "zoned": false, 00:08:16.410 "supported_io_types": { 00:08:16.410 "read": true, 00:08:16.410 "write": true, 
00:08:16.410 "unmap": true, 00:08:16.410 "flush": true, 00:08:16.410 "reset": true, 00:08:16.410 "nvme_admin": false, 00:08:16.410 "nvme_io": false, 00:08:16.410 "nvme_io_md": false, 00:08:16.410 "write_zeroes": true, 00:08:16.410 "zcopy": true, 00:08:16.410 "get_zone_info": false, 00:08:16.410 "zone_management": false, 00:08:16.410 "zone_append": false, 00:08:16.410 "compare": false, 00:08:16.410 "compare_and_write": false, 00:08:16.410 "abort": true, 00:08:16.410 "seek_hole": false, 00:08:16.410 "seek_data": false, 00:08:16.410 "copy": true, 00:08:16.410 "nvme_iov_md": false 00:08:16.410 }, 00:08:16.410 "memory_domains": [ 00:08:16.410 { 00:08:16.410 "dma_device_id": "system", 00:08:16.410 "dma_device_type": 1 00:08:16.410 }, 00:08:16.410 { 00:08:16.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.410 "dma_device_type": 2 00:08:16.410 } 00:08:16.410 ], 00:08:16.410 "driver_specific": {} 00:08:16.410 } 00:08:16.410 ] 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:16.410 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.411 "name": "Existed_Raid", 00:08:16.411 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:16.411 "strip_size_kb": 64, 00:08:16.411 "state": "online", 00:08:16.411 "raid_level": "raid0", 00:08:16.411 "superblock": true, 00:08:16.411 "num_base_bdevs": 3, 00:08:16.411 "num_base_bdevs_discovered": 3, 00:08:16.411 "num_base_bdevs_operational": 3, 00:08:16.411 "base_bdevs_list": [ 00:08:16.411 { 00:08:16.411 "name": "NewBaseBdev", 00:08:16.411 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:16.411 "is_configured": true, 00:08:16.411 "data_offset": 2048, 00:08:16.411 "data_size": 63488 00:08:16.411 }, 00:08:16.411 { 00:08:16.411 "name": "BaseBdev2", 00:08:16.411 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:16.411 "is_configured": true, 00:08:16.411 "data_offset": 2048, 00:08:16.411 "data_size": 63488 00:08:16.411 }, 00:08:16.411 { 00:08:16.411 "name": "BaseBdev3", 00:08:16.411 "uuid": 
"d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:16.411 "is_configured": true, 00:08:16.411 "data_offset": 2048, 00:08:16.411 "data_size": 63488 00:08:16.411 } 00:08:16.411 ] 00:08:16.411 }' 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.411 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.671 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.671 [2024-09-30 23:25:56.515839] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.939 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.939 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.939 "name": "Existed_Raid", 00:08:16.939 "aliases": [ 00:08:16.939 "26ebada6-d361-41d2-a420-e748ff060e33" 
00:08:16.939 ], 00:08:16.939 "product_name": "Raid Volume", 00:08:16.939 "block_size": 512, 00:08:16.939 "num_blocks": 190464, 00:08:16.939 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:16.939 "assigned_rate_limits": { 00:08:16.939 "rw_ios_per_sec": 0, 00:08:16.939 "rw_mbytes_per_sec": 0, 00:08:16.939 "r_mbytes_per_sec": 0, 00:08:16.939 "w_mbytes_per_sec": 0 00:08:16.939 }, 00:08:16.939 "claimed": false, 00:08:16.939 "zoned": false, 00:08:16.939 "supported_io_types": { 00:08:16.939 "read": true, 00:08:16.939 "write": true, 00:08:16.939 "unmap": true, 00:08:16.939 "flush": true, 00:08:16.939 "reset": true, 00:08:16.939 "nvme_admin": false, 00:08:16.939 "nvme_io": false, 00:08:16.939 "nvme_io_md": false, 00:08:16.939 "write_zeroes": true, 00:08:16.939 "zcopy": false, 00:08:16.939 "get_zone_info": false, 00:08:16.939 "zone_management": false, 00:08:16.939 "zone_append": false, 00:08:16.939 "compare": false, 00:08:16.939 "compare_and_write": false, 00:08:16.939 "abort": false, 00:08:16.939 "seek_hole": false, 00:08:16.939 "seek_data": false, 00:08:16.939 "copy": false, 00:08:16.939 "nvme_iov_md": false 00:08:16.939 }, 00:08:16.939 "memory_domains": [ 00:08:16.939 { 00:08:16.939 "dma_device_id": "system", 00:08:16.939 "dma_device_type": 1 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.939 "dma_device_type": 2 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "dma_device_id": "system", 00:08:16.939 "dma_device_type": 1 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.939 "dma_device_type": 2 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "dma_device_id": "system", 00:08:16.939 "dma_device_type": 1 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.939 "dma_device_type": 2 00:08:16.939 } 00:08:16.939 ], 00:08:16.939 "driver_specific": { 00:08:16.939 "raid": { 00:08:16.939 "uuid": "26ebada6-d361-41d2-a420-e748ff060e33", 00:08:16.939 
"strip_size_kb": 64, 00:08:16.939 "state": "online", 00:08:16.939 "raid_level": "raid0", 00:08:16.939 "superblock": true, 00:08:16.939 "num_base_bdevs": 3, 00:08:16.939 "num_base_bdevs_discovered": 3, 00:08:16.939 "num_base_bdevs_operational": 3, 00:08:16.939 "base_bdevs_list": [ 00:08:16.939 { 00:08:16.939 "name": "NewBaseBdev", 00:08:16.939 "uuid": "4255782a-8523-4c46-8c0b-7c00bb3422de", 00:08:16.939 "is_configured": true, 00:08:16.939 "data_offset": 2048, 00:08:16.939 "data_size": 63488 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "name": "BaseBdev2", 00:08:16.939 "uuid": "ecb7ed3f-4d9d-49ca-b502-82d6814f5f75", 00:08:16.939 "is_configured": true, 00:08:16.939 "data_offset": 2048, 00:08:16.939 "data_size": 63488 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "name": "BaseBdev3", 00:08:16.939 "uuid": "d50982be-cc42-4430-8201-78fd0859dfa2", 00:08:16.939 "is_configured": true, 00:08:16.939 "data_offset": 2048, 00:08:16.939 "data_size": 63488 00:08:16.939 } 00:08:16.939 ] 00:08:16.939 } 00:08:16.939 } 00:08:16.939 }' 00:08:16.939 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.939 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:16.939 BaseBdev2 00:08:16.939 BaseBdev3' 00:08:16.939 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.939 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.939 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.940 23:25:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.940 [2024-09-30 23:25:56.779133] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.940 [2024-09-30 23:25:56.779162] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.940 [2024-09-30 23:25:56.779229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.940 [2024-09-30 23:25:56.779282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.940 [2024-09-30 23:25:56.779302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75688 00:08:16.940 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75688 ']' 00:08:16.940 23:25:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75688 00:08:17.212 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:17.212 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.212 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75688 00:08:17.212 killing process with pid 75688 00:08:17.212 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.212 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.212 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75688' 00:08:17.212 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75688 00:08:17.212 [2024-09-30 23:25:56.830537] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.212 23:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75688 00:08:17.212 [2024-09-30 23:25:56.862305] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.472 23:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:17.472 00:08:17.472 real 0m8.701s 00:08:17.472 user 0m14.745s 00:08:17.472 sys 0m1.875s 00:08:17.472 23:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.472 ************************************ 00:08:17.472 END TEST raid_state_function_test_sb 00:08:17.472 ************************************ 00:08:17.472 23:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.472 23:25:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:17.472 23:25:57 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:17.472 23:25:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.472 23:25:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.472 ************************************ 00:08:17.472 START TEST raid_superblock_test 00:08:17.472 ************************************ 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:17.472 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:17.473 23:25:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76292 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76292 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76292 ']' 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.473 23:25:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.473 [2024-09-30 23:25:57.276540] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:08:17.473 [2024-09-30 23:25:57.276670] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76292 ] 00:08:17.731 [2024-09-30 23:25:57.439390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.731 [2024-09-30 23:25:57.485890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.731 [2024-09-30 23:25:57.528001] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.731 [2024-09-30 23:25:57.528042] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:18.297 
23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.297 malloc1 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.297 [2024-09-30 23:25:58.130038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:18.297 [2024-09-30 23:25:58.130209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.297 [2024-09-30 23:25:58.130253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:18.297 [2024-09-30 23:25:58.130293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.297 [2024-09-30 23:25:58.132336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.297 [2024-09-30 23:25:58.132413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:18.297 pt1 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.297 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.555 malloc2 00:08:18.555 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.556 [2024-09-30 23:25:58.173255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:18.556 [2024-09-30 23:25:58.173416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.556 [2024-09-30 23:25:58.173461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:18.556 [2024-09-30 23:25:58.173508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.556 [2024-09-30 23:25:58.176304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.556 [2024-09-30 23:25:58.176409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:18.556 
pt2 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.556 malloc3 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.556 [2024-09-30 23:25:58.201832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:18.556 [2024-09-30 23:25:58.201979] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.556 [2024-09-30 23:25:58.202016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:18.556 [2024-09-30 23:25:58.202060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.556 [2024-09-30 23:25:58.204061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.556 [2024-09-30 23:25:58.204139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:18.556 pt3 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.556 [2024-09-30 23:25:58.213873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:18.556 [2024-09-30 23:25:58.215752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:18.556 [2024-09-30 23:25:58.215870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:18.556 [2024-09-30 23:25:58.216032] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:18.556 [2024-09-30 23:25:58.216077] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:18.556 [2024-09-30 23:25:58.216343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:18.556 [2024-09-30 23:25:58.216512] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:18.556 [2024-09-30 23:25:58.216557] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:18.556 [2024-09-30 23:25:58.216723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.556 23:25:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.556 "name": "raid_bdev1", 00:08:18.556 "uuid": "1decc354-5995-455a-a3bd-01d10d6a8a63", 00:08:18.556 "strip_size_kb": 64, 00:08:18.556 "state": "online", 00:08:18.556 "raid_level": "raid0", 00:08:18.556 "superblock": true, 00:08:18.556 "num_base_bdevs": 3, 00:08:18.556 "num_base_bdevs_discovered": 3, 00:08:18.556 "num_base_bdevs_operational": 3, 00:08:18.556 "base_bdevs_list": [ 00:08:18.556 { 00:08:18.556 "name": "pt1", 00:08:18.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.556 "is_configured": true, 00:08:18.556 "data_offset": 2048, 00:08:18.556 "data_size": 63488 00:08:18.556 }, 00:08:18.556 { 00:08:18.556 "name": "pt2", 00:08:18.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.556 "is_configured": true, 00:08:18.556 "data_offset": 2048, 00:08:18.556 "data_size": 63488 00:08:18.556 }, 00:08:18.556 { 00:08:18.556 "name": "pt3", 00:08:18.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:18.556 "is_configured": true, 00:08:18.556 "data_offset": 2048, 00:08:18.556 "data_size": 63488 00:08:18.556 } 00:08:18.556 ] 00:08:18.556 }' 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.556 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.814 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.814 [2024-09-30 23:25:58.657386] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.074 "name": "raid_bdev1", 00:08:19.074 "aliases": [ 00:08:19.074 "1decc354-5995-455a-a3bd-01d10d6a8a63" 00:08:19.074 ], 00:08:19.074 "product_name": "Raid Volume", 00:08:19.074 "block_size": 512, 00:08:19.074 "num_blocks": 190464, 00:08:19.074 "uuid": "1decc354-5995-455a-a3bd-01d10d6a8a63", 00:08:19.074 "assigned_rate_limits": { 00:08:19.074 "rw_ios_per_sec": 0, 00:08:19.074 "rw_mbytes_per_sec": 0, 00:08:19.074 "r_mbytes_per_sec": 0, 00:08:19.074 "w_mbytes_per_sec": 0 00:08:19.074 }, 00:08:19.074 "claimed": false, 00:08:19.074 "zoned": false, 00:08:19.074 "supported_io_types": { 00:08:19.074 "read": true, 00:08:19.074 "write": true, 00:08:19.074 "unmap": true, 00:08:19.074 "flush": true, 00:08:19.074 "reset": true, 00:08:19.074 "nvme_admin": false, 00:08:19.074 "nvme_io": false, 00:08:19.074 "nvme_io_md": false, 00:08:19.074 "write_zeroes": true, 00:08:19.074 "zcopy": false, 00:08:19.074 "get_zone_info": false, 00:08:19.074 "zone_management": false, 00:08:19.074 "zone_append": false, 00:08:19.074 "compare": 
false, 00:08:19.074 "compare_and_write": false, 00:08:19.074 "abort": false, 00:08:19.074 "seek_hole": false, 00:08:19.074 "seek_data": false, 00:08:19.074 "copy": false, 00:08:19.074 "nvme_iov_md": false 00:08:19.074 }, 00:08:19.074 "memory_domains": [ 00:08:19.074 { 00:08:19.074 "dma_device_id": "system", 00:08:19.074 "dma_device_type": 1 00:08:19.074 }, 00:08:19.074 { 00:08:19.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.074 "dma_device_type": 2 00:08:19.074 }, 00:08:19.074 { 00:08:19.074 "dma_device_id": "system", 00:08:19.074 "dma_device_type": 1 00:08:19.074 }, 00:08:19.074 { 00:08:19.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.074 "dma_device_type": 2 00:08:19.074 }, 00:08:19.074 { 00:08:19.074 "dma_device_id": "system", 00:08:19.074 "dma_device_type": 1 00:08:19.074 }, 00:08:19.074 { 00:08:19.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.074 "dma_device_type": 2 00:08:19.074 } 00:08:19.074 ], 00:08:19.074 "driver_specific": { 00:08:19.074 "raid": { 00:08:19.074 "uuid": "1decc354-5995-455a-a3bd-01d10d6a8a63", 00:08:19.074 "strip_size_kb": 64, 00:08:19.074 "state": "online", 00:08:19.074 "raid_level": "raid0", 00:08:19.074 "superblock": true, 00:08:19.074 "num_base_bdevs": 3, 00:08:19.074 "num_base_bdevs_discovered": 3, 00:08:19.074 "num_base_bdevs_operational": 3, 00:08:19.074 "base_bdevs_list": [ 00:08:19.074 { 00:08:19.074 "name": "pt1", 00:08:19.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.074 "is_configured": true, 00:08:19.074 "data_offset": 2048, 00:08:19.074 "data_size": 63488 00:08:19.074 }, 00:08:19.074 { 00:08:19.074 "name": "pt2", 00:08:19.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.074 "is_configured": true, 00:08:19.074 "data_offset": 2048, 00:08:19.074 "data_size": 63488 00:08:19.074 }, 00:08:19.074 { 00:08:19.074 "name": "pt3", 00:08:19.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.074 "is_configured": true, 00:08:19.074 "data_offset": 2048, 00:08:19.074 "data_size": 
63488 00:08:19.074 } 00:08:19.074 ] 00:08:19.074 } 00:08:19.074 } 00:08:19.074 }' 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:19.074 pt2 00:08:19.074 pt3' 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.074 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.075 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.336 [2024-09-30 23:25:58.940788] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1decc354-5995-455a-a3bd-01d10d6a8a63 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1decc354-5995-455a-a3bd-01d10d6a8a63 ']' 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.336 [2024-09-30 23:25:58.984448] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.336 [2024-09-30 23:25:58.984521] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.336 [2024-09-30 23:25:58.984643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.336 [2024-09-30 23:25:58.984741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.336 [2024-09-30 23:25:58.984791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.336 23:25:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.336 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.337 [2024-09-30 23:25:59.124223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:19.337 [2024-09-30 23:25:59.126128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:19.337 [2024-09-30 23:25:59.126210] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:19.337 [2024-09-30 23:25:59.126279] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:19.337 [2024-09-30 23:25:59.126415] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:19.337 [2024-09-30 23:25:59.126477] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:19.337 [2024-09-30 23:25:59.126519] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.337 [2024-09-30 23:25:59.126573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:19.337 request: 00:08:19.337 { 00:08:19.337 "name": "raid_bdev1", 00:08:19.337 "raid_level": "raid0", 00:08:19.337 "base_bdevs": [ 00:08:19.337 "malloc1", 00:08:19.337 "malloc2", 00:08:19.337 "malloc3" 00:08:19.337 ], 00:08:19.337 "strip_size_kb": 64, 00:08:19.337 "superblock": false, 00:08:19.337 "method": "bdev_raid_create", 00:08:19.337 "req_id": 1 00:08:19.337 } 00:08:19.337 Got JSON-RPC error response 00:08:19.337 response: 00:08:19.337 { 00:08:19.337 "code": -17, 00:08:19.337 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:19.337 } 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.337 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.597 [2024-09-30 23:25:59.192078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.597 [2024-09-30 23:25:59.192128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.597 [2024-09-30 23:25:59.192144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:19.597 [2024-09-30 23:25:59.192154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.597 [2024-09-30 23:25:59.194252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.597 [2024-09-30 23:25:59.194292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.597 [2024-09-30 23:25:59.194361] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:19.597 [2024-09-30 23:25:59.194399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:19.597 pt1 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.597 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.597 "name": "raid_bdev1", 00:08:19.597 "uuid": "1decc354-5995-455a-a3bd-01d10d6a8a63", 00:08:19.597 
"strip_size_kb": 64, 00:08:19.598 "state": "configuring", 00:08:19.598 "raid_level": "raid0", 00:08:19.598 "superblock": true, 00:08:19.598 "num_base_bdevs": 3, 00:08:19.598 "num_base_bdevs_discovered": 1, 00:08:19.598 "num_base_bdevs_operational": 3, 00:08:19.598 "base_bdevs_list": [ 00:08:19.598 { 00:08:19.598 "name": "pt1", 00:08:19.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.598 "is_configured": true, 00:08:19.598 "data_offset": 2048, 00:08:19.598 "data_size": 63488 00:08:19.598 }, 00:08:19.598 { 00:08:19.598 "name": null, 00:08:19.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.598 "is_configured": false, 00:08:19.598 "data_offset": 2048, 00:08:19.598 "data_size": 63488 00:08:19.598 }, 00:08:19.598 { 00:08:19.598 "name": null, 00:08:19.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.598 "is_configured": false, 00:08:19.598 "data_offset": 2048, 00:08:19.598 "data_size": 63488 00:08:19.598 } 00:08:19.598 ] 00:08:19.598 }' 00:08:19.598 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.598 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.857 [2024-09-30 23:25:59.583461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:19.857 [2024-09-30 23:25:59.583564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.857 [2024-09-30 23:25:59.583597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:19.857 [2024-09-30 23:25:59.583628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.857 [2024-09-30 23:25:59.583979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.857 [2024-09-30 23:25:59.584041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:19.857 [2024-09-30 23:25:59.584123] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:19.857 [2024-09-30 23:25:59.584170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:19.857 pt2 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.857 [2024-09-30 23:25:59.591469] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.857 23:25:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.857 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.857 "name": "raid_bdev1", 00:08:19.857 "uuid": "1decc354-5995-455a-a3bd-01d10d6a8a63", 00:08:19.857 "strip_size_kb": 64, 00:08:19.857 "state": "configuring", 00:08:19.857 "raid_level": "raid0", 00:08:19.857 "superblock": true, 00:08:19.857 "num_base_bdevs": 3, 00:08:19.857 "num_base_bdevs_discovered": 1, 00:08:19.857 "num_base_bdevs_operational": 3, 00:08:19.857 "base_bdevs_list": [ 00:08:19.857 { 00:08:19.857 "name": "pt1", 00:08:19.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.857 "is_configured": true, 00:08:19.857 "data_offset": 2048, 00:08:19.857 "data_size": 63488 00:08:19.857 }, 00:08:19.857 { 00:08:19.857 "name": null, 00:08:19.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.857 "is_configured": false, 00:08:19.857 "data_offset": 0, 00:08:19.858 "data_size": 63488 00:08:19.858 }, 00:08:19.858 { 00:08:19.858 "name": null, 00:08:19.858 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:19.858 
"is_configured": false, 00:08:19.858 "data_offset": 2048, 00:08:19.858 "data_size": 63488 00:08:19.858 } 00:08:19.858 ] 00:08:19.858 }' 00:08:19.858 23:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.858 23:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.427 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:20.427 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.427 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.427 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.427 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.427 [2024-09-30 23:26:00.018850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.427 [2024-09-30 23:26:00.018915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.427 [2024-09-30 23:26:00.018933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:20.427 [2024-09-30 23:26:00.018941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.427 [2024-09-30 23:26:00.019300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.428 [2024-09-30 23:26:00.019316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.428 [2024-09-30 23:26:00.019380] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:20.428 [2024-09-30 23:26:00.019398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.428 pt2 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.428 [2024-09-30 23:26:00.030829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:20.428 [2024-09-30 23:26:00.030883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.428 [2024-09-30 23:26:00.030900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:20.428 [2024-09-30 23:26:00.030908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.428 [2024-09-30 23:26:00.031219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.428 [2024-09-30 23:26:00.031233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:20.428 [2024-09-30 23:26:00.031284] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:20.428 [2024-09-30 23:26:00.031299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:20.428 [2024-09-30 23:26:00.031376] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:20.428 [2024-09-30 23:26:00.031384] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.428 [2024-09-30 23:26:00.031599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:20.428 [2024-09-30 23:26:00.031695] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:20.428 [2024-09-30 23:26:00.031706] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:20.428 [2024-09-30 23:26:00.031793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.428 pt3 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.428 "name": "raid_bdev1", 00:08:20.428 "uuid": "1decc354-5995-455a-a3bd-01d10d6a8a63", 00:08:20.428 "strip_size_kb": 64, 00:08:20.428 "state": "online", 00:08:20.428 "raid_level": "raid0", 00:08:20.428 "superblock": true, 00:08:20.428 "num_base_bdevs": 3, 00:08:20.428 "num_base_bdevs_discovered": 3, 00:08:20.428 "num_base_bdevs_operational": 3, 00:08:20.428 "base_bdevs_list": [ 00:08:20.428 { 00:08:20.428 "name": "pt1", 00:08:20.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.428 "is_configured": true, 00:08:20.428 "data_offset": 2048, 00:08:20.428 "data_size": 63488 00:08:20.428 }, 00:08:20.428 { 00:08:20.428 "name": "pt2", 00:08:20.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.428 "is_configured": true, 00:08:20.428 "data_offset": 2048, 00:08:20.428 "data_size": 63488 00:08:20.428 }, 00:08:20.428 { 00:08:20.428 "name": "pt3", 00:08:20.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.428 "is_configured": true, 00:08:20.428 "data_offset": 2048, 00:08:20.428 "data_size": 63488 00:08:20.428 } 00:08:20.428 ] 00:08:20.428 }' 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.428 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:20.688 23:26:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.688 [2024-09-30 23:26:00.510349] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.688 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.688 "name": "raid_bdev1", 00:08:20.688 "aliases": [ 00:08:20.688 "1decc354-5995-455a-a3bd-01d10d6a8a63" 00:08:20.688 ], 00:08:20.688 "product_name": "Raid Volume", 00:08:20.688 "block_size": 512, 00:08:20.688 "num_blocks": 190464, 00:08:20.688 "uuid": "1decc354-5995-455a-a3bd-01d10d6a8a63", 00:08:20.688 "assigned_rate_limits": { 00:08:20.688 "rw_ios_per_sec": 0, 00:08:20.688 "rw_mbytes_per_sec": 0, 00:08:20.688 "r_mbytes_per_sec": 0, 00:08:20.688 "w_mbytes_per_sec": 0 00:08:20.688 }, 00:08:20.688 "claimed": false, 00:08:20.688 "zoned": false, 00:08:20.688 "supported_io_types": { 00:08:20.688 "read": true, 00:08:20.688 "write": true, 00:08:20.688 "unmap": true, 00:08:20.688 "flush": true, 00:08:20.688 "reset": true, 00:08:20.688 "nvme_admin": false, 00:08:20.688 "nvme_io": false, 00:08:20.688 "nvme_io_md": false, 00:08:20.688 
"write_zeroes": true, 00:08:20.688 "zcopy": false, 00:08:20.688 "get_zone_info": false, 00:08:20.688 "zone_management": false, 00:08:20.688 "zone_append": false, 00:08:20.688 "compare": false, 00:08:20.688 "compare_and_write": false, 00:08:20.688 "abort": false, 00:08:20.688 "seek_hole": false, 00:08:20.688 "seek_data": false, 00:08:20.688 "copy": false, 00:08:20.688 "nvme_iov_md": false 00:08:20.688 }, 00:08:20.688 "memory_domains": [ 00:08:20.688 { 00:08:20.688 "dma_device_id": "system", 00:08:20.688 "dma_device_type": 1 00:08:20.688 }, 00:08:20.688 { 00:08:20.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.688 "dma_device_type": 2 00:08:20.688 }, 00:08:20.688 { 00:08:20.688 "dma_device_id": "system", 00:08:20.688 "dma_device_type": 1 00:08:20.688 }, 00:08:20.688 { 00:08:20.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.688 "dma_device_type": 2 00:08:20.688 }, 00:08:20.688 { 00:08:20.688 "dma_device_id": "system", 00:08:20.688 "dma_device_type": 1 00:08:20.688 }, 00:08:20.688 { 00:08:20.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.688 "dma_device_type": 2 00:08:20.688 } 00:08:20.688 ], 00:08:20.688 "driver_specific": { 00:08:20.688 "raid": { 00:08:20.688 "uuid": "1decc354-5995-455a-a3bd-01d10d6a8a63", 00:08:20.688 "strip_size_kb": 64, 00:08:20.688 "state": "online", 00:08:20.688 "raid_level": "raid0", 00:08:20.688 "superblock": true, 00:08:20.688 "num_base_bdevs": 3, 00:08:20.688 "num_base_bdevs_discovered": 3, 00:08:20.688 "num_base_bdevs_operational": 3, 00:08:20.688 "base_bdevs_list": [ 00:08:20.688 { 00:08:20.688 "name": "pt1", 00:08:20.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:20.688 "is_configured": true, 00:08:20.688 "data_offset": 2048, 00:08:20.688 "data_size": 63488 00:08:20.688 }, 00:08:20.688 { 00:08:20.688 "name": "pt2", 00:08:20.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.689 "is_configured": true, 00:08:20.689 "data_offset": 2048, 00:08:20.689 "data_size": 63488 00:08:20.689 }, 00:08:20.689 
{ 00:08:20.689 "name": "pt3", 00:08:20.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:20.689 "is_configured": true, 00:08:20.689 "data_offset": 2048, 00:08:20.689 "data_size": 63488 00:08:20.689 } 00:08:20.689 ] 00:08:20.689 } 00:08:20.689 } 00:08:20.689 }' 00:08:20.948 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:20.949 pt2 00:08:20.949 pt3' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:20.949 23:26:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.949 
[2024-09-30 23:26:00.761890] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.949 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1decc354-5995-455a-a3bd-01d10d6a8a63 '!=' 1decc354-5995-455a-a3bd-01d10d6a8a63 ']' 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76292 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76292 ']' 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76292 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76292 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76292' 00:08:21.208 killing process with pid 76292 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76292 00:08:21.208 [2024-09-30 23:26:00.847179] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.208 [2024-09-30 23:26:00.847317] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.208 23:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76292 00:08:21.208 [2024-09-30 23:26:00.847418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.208 [2024-09-30 23:26:00.847430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:21.208 [2024-09-30 23:26:00.880419] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.468 23:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:21.468 00:08:21.468 real 0m3.939s 00:08:21.468 user 0m6.213s 00:08:21.468 sys 0m0.849s 00:08:21.468 23:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.468 ************************************ 00:08:21.468 END TEST raid_superblock_test 00:08:21.468 ************************************ 00:08:21.468 23:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.468 23:26:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:21.468 23:26:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:21.468 23:26:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.468 23:26:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.468 ************************************ 00:08:21.468 START TEST raid_read_error_test 00:08:21.468 ************************************ 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:21.468 23:26:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KKBFh1NqJt 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76534 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76534 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76534 ']' 00:08:21.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.468 23:26:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.468 [2024-09-30 23:26:01.302937] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:08:21.468 [2024-09-30 23:26:01.303053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76534 ] 00:08:21.728 [2024-09-30 23:26:01.464363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.728 [2024-09-30 23:26:01.507687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.728 [2024-09-30 23:26:01.548925] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.728 [2024-09-30 23:26:01.548959] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.297 BaseBdev1_malloc 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.297 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.298 true 00:08:22.298 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:22.298 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.298 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.298 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 [2024-09-30 23:26:02.154785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.558 [2024-09-30 23:26:02.154837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.558 [2024-09-30 23:26:02.154865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.558 [2024-09-30 23:26:02.154875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.558 [2024-09-30 23:26:02.157074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.558 [2024-09-30 23:26:02.157104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.558 BaseBdev1 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 BaseBdev2_malloc 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 true 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 [2024-09-30 23:26:02.203401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.558 [2024-09-30 23:26:02.203519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.558 [2024-09-30 23:26:02.203542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.558 [2024-09-30 23:26:02.203551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.558 [2024-09-30 23:26:02.205644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.558 [2024-09-30 23:26:02.205682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.558 BaseBdev2 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 BaseBdev3_malloc 00:08:22.558 23:26:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 true 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 [2024-09-30 23:26:02.232179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:22.558 [2024-09-30 23:26:02.232291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.558 [2024-09-30 23:26:02.232315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:22.558 [2024-09-30 23:26:02.232324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.558 [2024-09-30 23:26:02.234411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.558 [2024-09-30 23:26:02.234446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:22.558 BaseBdev3 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 [2024-09-30 23:26:02.244218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.558 [2024-09-30 23:26:02.246023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.558 [2024-09-30 23:26:02.246155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.558 [2024-09-30 23:26:02.246333] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:22.558 [2024-09-30 23:26:02.246349] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.558 [2024-09-30 23:26:02.246588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:22.558 [2024-09-30 23:26:02.246752] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:22.558 [2024-09-30 23:26:02.246764] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:22.558 [2024-09-30 23:26:02.246909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.558 23:26:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.558 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.558 "name": "raid_bdev1", 00:08:22.558 "uuid": "62633e21-e2ad-4f26-a4f0-068bfa3b8ef6", 00:08:22.558 "strip_size_kb": 64, 00:08:22.558 "state": "online", 00:08:22.558 "raid_level": "raid0", 00:08:22.558 "superblock": true, 00:08:22.558 "num_base_bdevs": 3, 00:08:22.558 "num_base_bdevs_discovered": 3, 00:08:22.558 "num_base_bdevs_operational": 3, 00:08:22.558 "base_bdevs_list": [ 00:08:22.558 { 00:08:22.558 "name": "BaseBdev1", 00:08:22.558 "uuid": "dd60adb4-acf3-5f05-b47a-b3411097aaf4", 00:08:22.558 "is_configured": true, 00:08:22.558 "data_offset": 2048, 00:08:22.558 "data_size": 63488 00:08:22.558 }, 00:08:22.558 { 00:08:22.558 "name": "BaseBdev2", 00:08:22.558 "uuid": "f731da4e-560b-5076-ba1c-76dc9e9f9cfc", 00:08:22.558 "is_configured": true, 00:08:22.558 "data_offset": 2048, 00:08:22.558 "data_size": 63488 
00:08:22.558 }, 00:08:22.558 { 00:08:22.559 "name": "BaseBdev3", 00:08:22.559 "uuid": "54bc4896-adec-5cf8-88c1-dbd3eddd1562", 00:08:22.559 "is_configured": true, 00:08:22.559 "data_offset": 2048, 00:08:22.559 "data_size": 63488 00:08:22.559 } 00:08:22.559 ] 00:08:22.559 }' 00:08:22.559 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.559 23:26:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.128 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:23.128 23:26:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:23.128 [2024-09-30 23:26:02.791758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.065 "name": "raid_bdev1", 00:08:24.065 "uuid": "62633e21-e2ad-4f26-a4f0-068bfa3b8ef6", 00:08:24.065 "strip_size_kb": 64, 00:08:24.065 "state": "online", 00:08:24.065 "raid_level": "raid0", 00:08:24.065 "superblock": true, 00:08:24.065 "num_base_bdevs": 3, 00:08:24.065 "num_base_bdevs_discovered": 3, 00:08:24.065 "num_base_bdevs_operational": 3, 00:08:24.065 "base_bdevs_list": [ 00:08:24.065 { 00:08:24.065 "name": "BaseBdev1", 00:08:24.065 "uuid": "dd60adb4-acf3-5f05-b47a-b3411097aaf4", 00:08:24.065 "is_configured": true, 00:08:24.065 "data_offset": 2048, 00:08:24.065 "data_size": 63488 
00:08:24.065 }, 00:08:24.065 { 00:08:24.065 "name": "BaseBdev2", 00:08:24.065 "uuid": "f731da4e-560b-5076-ba1c-76dc9e9f9cfc", 00:08:24.065 "is_configured": true, 00:08:24.065 "data_offset": 2048, 00:08:24.065 "data_size": 63488 00:08:24.065 }, 00:08:24.065 { 00:08:24.065 "name": "BaseBdev3", 00:08:24.065 "uuid": "54bc4896-adec-5cf8-88c1-dbd3eddd1562", 00:08:24.065 "is_configured": true, 00:08:24.065 "data_offset": 2048, 00:08:24.065 "data_size": 63488 00:08:24.065 } 00:08:24.065 ] 00:08:24.065 }' 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.065 23:26:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.324 [2024-09-30 23:26:04.147480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.324 [2024-09-30 23:26:04.147605] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.324 [2024-09-30 23:26:04.150069] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.324 [2024-09-30 23:26:04.150185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.324 [2024-09-30 23:26:04.150239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.324 [2024-09-30 23:26:04.150283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:24.324 { 00:08:24.324 "results": [ 00:08:24.324 { 00:08:24.324 "job": "raid_bdev1", 00:08:24.324 "core_mask": "0x1", 00:08:24.324 "workload": "randrw", 00:08:24.324 "percentage": 50, 
00:08:24.324 "status": "finished", 00:08:24.324 "queue_depth": 1, 00:08:24.324 "io_size": 131072, 00:08:24.324 "runtime": 1.356553, 00:08:24.324 "iops": 17564.37087235073, 00:08:24.324 "mibps": 2195.546359043841, 00:08:24.324 "io_failed": 1, 00:08:24.324 "io_timeout": 0, 00:08:24.324 "avg_latency_us": 78.96810401765785, 00:08:24.324 "min_latency_us": 18.33362445414847, 00:08:24.324 "max_latency_us": 1323.598253275109 00:08:24.324 } 00:08:24.324 ], 00:08:24.324 "core_count": 1 00:08:24.324 } 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76534 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76534 ']' 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76534 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.324 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76534 00:08:24.583 killing process with pid 76534 00:08:24.583 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.584 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.584 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76534' 00:08:24.584 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76534 00:08:24.584 [2024-09-30 23:26:04.184817] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.584 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76534 00:08:24.584 [2024-09-30 
23:26:04.210254] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KKBFh1NqJt 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:24.843 ************************************ 00:08:24.843 END TEST raid_read_error_test 00:08:24.843 ************************************ 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:24.843 00:08:24.843 real 0m3.258s 00:08:24.843 user 0m4.106s 00:08:24.843 sys 0m0.539s 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.843 23:26:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.843 23:26:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:24.843 23:26:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:24.843 23:26:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.843 23:26:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.843 ************************************ 00:08:24.843 START TEST raid_write_error_test 00:08:24.843 ************************************ 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:24.843 23:26:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:24.843 23:26:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EhbUH2RFel 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76663 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76663 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76663 ']' 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.843 23:26:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.843 [2024-09-30 23:26:04.631215] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:08:24.843 [2024-09-30 23:26:04.631438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76663 ] 00:08:25.102 [2024-09-30 23:26:04.789978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.102 [2024-09-30 23:26:04.835547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.102 [2024-09-30 23:26:04.877566] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.102 [2024-09-30 23:26:04.877623] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.672 BaseBdev1_malloc 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.672 true 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.672 [2024-09-30 23:26:05.487851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.672 [2024-09-30 23:26:05.487930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.672 [2024-09-30 23:26:05.487957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:25.672 [2024-09-30 23:26:05.487966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.672 [2024-09-30 23:26:05.490077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.672 [2024-09-30 23:26:05.490118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.672 BaseBdev1 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.672 BaseBdev2_malloc 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.672 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.932 true 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.932 [2024-09-30 23:26:05.538671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.932 [2024-09-30 23:26:05.538824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.932 [2024-09-30 23:26:05.538848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.932 [2024-09-30 23:26:05.538869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.932 [2024-09-30 23:26:05.540902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.932 [2024-09-30 23:26:05.540939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:25.932 BaseBdev2 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.932 23:26:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.932 BaseBdev3_malloc 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.932 true 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.932 [2024-09-30 23:26:05.579435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:25.932 [2024-09-30 23:26:05.579490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.932 [2024-09-30 23:26:05.579508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:25.932 [2024-09-30 23:26:05.579517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.932 [2024-09-30 23:26:05.581646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.932 [2024-09-30 23:26:05.581686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:25.932 BaseBdev3 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.932 [2024-09-30 23:26:05.591466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.932 [2024-09-30 23:26:05.593330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.932 [2024-09-30 23:26:05.593407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.932 [2024-09-30 23:26:05.593570] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:25.932 [2024-09-30 23:26:05.593585] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:25.932 [2024-09-30 23:26:05.593816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:25.932 [2024-09-30 23:26:05.593959] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:25.932 [2024-09-30 23:26:05.593970] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:25.932 [2024-09-30 23:26:05.594106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.932 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.933 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.933 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.933 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.933 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.933 "name": "raid_bdev1", 00:08:25.933 "uuid": "516dff13-b2f3-45a4-ac80-e55a2734e1ff", 00:08:25.933 "strip_size_kb": 64, 00:08:25.933 "state": "online", 00:08:25.933 "raid_level": "raid0", 00:08:25.933 "superblock": true, 00:08:25.933 "num_base_bdevs": 3, 00:08:25.933 "num_base_bdevs_discovered": 3, 00:08:25.933 "num_base_bdevs_operational": 3, 00:08:25.933 "base_bdevs_list": [ 00:08:25.933 { 00:08:25.933 "name": "BaseBdev1", 
00:08:25.933 "uuid": "e8ffc440-5840-52f0-825b-f90cf3e0bef3", 00:08:25.933 "is_configured": true, 00:08:25.933 "data_offset": 2048, 00:08:25.933 "data_size": 63488 00:08:25.933 }, 00:08:25.933 { 00:08:25.933 "name": "BaseBdev2", 00:08:25.933 "uuid": "4befbbd5-10b5-550a-bd9c-612109807ac6", 00:08:25.933 "is_configured": true, 00:08:25.933 "data_offset": 2048, 00:08:25.933 "data_size": 63488 00:08:25.933 }, 00:08:25.933 { 00:08:25.933 "name": "BaseBdev3", 00:08:25.933 "uuid": "1fafe6bc-52fa-59eb-83f3-32e974ce4f1a", 00:08:25.933 "is_configured": true, 00:08:25.933 "data_offset": 2048, 00:08:25.933 "data_size": 63488 00:08:25.933 } 00:08:25.933 ] 00:08:25.933 }' 00:08:25.933 23:26:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.933 23:26:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.192 23:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:26.192 23:26:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:26.452 [2024-09-30 23:26:06.087177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.389 "name": "raid_bdev1", 00:08:27.389 "uuid": "516dff13-b2f3-45a4-ac80-e55a2734e1ff", 00:08:27.389 "strip_size_kb": 64, 00:08:27.389 "state": "online", 00:08:27.389 
"raid_level": "raid0", 00:08:27.389 "superblock": true, 00:08:27.389 "num_base_bdevs": 3, 00:08:27.389 "num_base_bdevs_discovered": 3, 00:08:27.389 "num_base_bdevs_operational": 3, 00:08:27.389 "base_bdevs_list": [ 00:08:27.389 { 00:08:27.389 "name": "BaseBdev1", 00:08:27.389 "uuid": "e8ffc440-5840-52f0-825b-f90cf3e0bef3", 00:08:27.389 "is_configured": true, 00:08:27.389 "data_offset": 2048, 00:08:27.389 "data_size": 63488 00:08:27.389 }, 00:08:27.389 { 00:08:27.389 "name": "BaseBdev2", 00:08:27.389 "uuid": "4befbbd5-10b5-550a-bd9c-612109807ac6", 00:08:27.389 "is_configured": true, 00:08:27.389 "data_offset": 2048, 00:08:27.389 "data_size": 63488 00:08:27.389 }, 00:08:27.389 { 00:08:27.389 "name": "BaseBdev3", 00:08:27.389 "uuid": "1fafe6bc-52fa-59eb-83f3-32e974ce4f1a", 00:08:27.389 "is_configured": true, 00:08:27.389 "data_offset": 2048, 00:08:27.389 "data_size": 63488 00:08:27.389 } 00:08:27.389 ] 00:08:27.389 }' 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.389 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.959 [2024-09-30 23:26:07.511107] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.959 [2024-09-30 23:26:07.511230] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.959 [2024-09-30 23:26:07.513837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.959 [2024-09-30 23:26:07.513901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.959 [2024-09-30 23:26:07.513939] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.959 [2024-09-30 23:26:07.513953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:27.959 { 00:08:27.959 "results": [ 00:08:27.959 { 00:08:27.959 "job": "raid_bdev1", 00:08:27.959 "core_mask": "0x1", 00:08:27.959 "workload": "randrw", 00:08:27.959 "percentage": 50, 00:08:27.959 "status": "finished", 00:08:27.959 "queue_depth": 1, 00:08:27.959 "io_size": 131072, 00:08:27.959 "runtime": 1.424946, 00:08:27.959 "iops": 17760.67303603084, 00:08:27.959 "mibps": 2220.084129503855, 00:08:27.959 "io_failed": 1, 00:08:27.959 "io_timeout": 0, 00:08:27.959 "avg_latency_us": 78.02182539963259, 00:08:27.959 "min_latency_us": 24.034934497816593, 00:08:27.959 "max_latency_us": 1352.216593886463 00:08:27.959 } 00:08:27.959 ], 00:08:27.959 "core_count": 1 00:08:27.959 } 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76663 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76663 ']' 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76663 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76663 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 76663' 00:08:27.959 killing process with pid 76663 00:08:27.959 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76663 00:08:27.959 [2024-09-30 23:26:07.562941] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.960 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76663 00:08:27.960 [2024-09-30 23:26:07.588023] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.960 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EhbUH2RFel 00:08:27.960 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:27.960 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:28.220 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:28.220 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:28.220 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.220 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.220 23:26:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:28.220 ************************************ 00:08:28.220 END TEST raid_write_error_test 00:08:28.220 ************************************ 00:08:28.220 00:08:28.220 real 0m3.283s 00:08:28.220 user 0m4.150s 00:08:28.220 sys 0m0.505s 00:08:28.220 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.220 23:26:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.220 23:26:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:28.220 23:26:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:28.220 23:26:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:28.220 23:26:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.220 23:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.220 ************************************ 00:08:28.220 START TEST raid_state_function_test 00:08:28.220 ************************************ 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:28.220 23:26:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76790 00:08:28.220 Process raid pid: 76790 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76790' 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76790 00:08:28.220 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76790 ']' 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.220 23:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.220 [2024-09-30 23:26:07.991905] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:08:28.220 [2024-09-30 23:26:07.992065] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.480 [2024-09-30 23:26:08.154667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.480 [2024-09-30 23:26:08.201075] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.480 [2024-09-30 23:26:08.244813] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.480 [2024-09-30 23:26:08.244849] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 
-r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.047 [2024-09-30 23:26:08.846887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.047 [2024-09-30 23:26:08.846950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.047 [2024-09-30 23:26:08.846964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.047 [2024-09-30 23:26:08.846975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.047 [2024-09-30 23:26:08.846981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.047 [2024-09-30 23:26:08.846992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.047 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.048 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.306 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.306 "name": "Existed_Raid", 00:08:29.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.306 "strip_size_kb": 64, 00:08:29.306 "state": "configuring", 00:08:29.306 "raid_level": "concat", 00:08:29.306 "superblock": false, 00:08:29.306 "num_base_bdevs": 3, 00:08:29.306 "num_base_bdevs_discovered": 0, 00:08:29.306 "num_base_bdevs_operational": 3, 00:08:29.306 "base_bdevs_list": [ 00:08:29.306 { 00:08:29.306 "name": "BaseBdev1", 00:08:29.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.306 "is_configured": false, 00:08:29.306 "data_offset": 0, 00:08:29.306 "data_size": 0 00:08:29.306 }, 00:08:29.306 { 00:08:29.306 "name": "BaseBdev2", 00:08:29.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.306 "is_configured": false, 00:08:29.306 "data_offset": 0, 00:08:29.306 "data_size": 0 00:08:29.306 }, 00:08:29.306 { 00:08:29.306 "name": "BaseBdev3", 00:08:29.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.306 "is_configured": 
false, 00:08:29.306 "data_offset": 0, 00:08:29.306 "data_size": 0 00:08:29.306 } 00:08:29.306 ] 00:08:29.306 }' 00:08:29.306 23:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.306 23:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.566 [2024-09-30 23:26:09.329952] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.566 [2024-09-30 23:26:09.330082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.566 [2024-09-30 23:26:09.341960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.566 [2024-09-30 23:26:09.342048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.566 [2024-09-30 23:26:09.342074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.566 [2024-09-30 23:26:09.342095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.566 [2024-09-30 23:26:09.342113] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:29.566 [2024-09-30 23:26:09.342133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.566 [2024-09-30 23:26:09.362585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.566 BaseBdev1 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.566 [ 00:08:29.566 { 00:08:29.566 "name": "BaseBdev1", 00:08:29.566 "aliases": [ 00:08:29.566 "02c831ab-ad7a-4042-9439-fa91266336ec" 00:08:29.566 ], 00:08:29.566 "product_name": "Malloc disk", 00:08:29.566 "block_size": 512, 00:08:29.566 "num_blocks": 65536, 00:08:29.566 "uuid": "02c831ab-ad7a-4042-9439-fa91266336ec", 00:08:29.566 "assigned_rate_limits": { 00:08:29.566 "rw_ios_per_sec": 0, 00:08:29.566 "rw_mbytes_per_sec": 0, 00:08:29.566 "r_mbytes_per_sec": 0, 00:08:29.566 "w_mbytes_per_sec": 0 00:08:29.566 }, 00:08:29.566 "claimed": true, 00:08:29.566 "claim_type": "exclusive_write", 00:08:29.566 "zoned": false, 00:08:29.566 "supported_io_types": { 00:08:29.566 "read": true, 00:08:29.566 "write": true, 00:08:29.566 "unmap": true, 00:08:29.566 "flush": true, 00:08:29.566 "reset": true, 00:08:29.566 "nvme_admin": false, 00:08:29.566 "nvme_io": false, 00:08:29.566 "nvme_io_md": false, 00:08:29.566 "write_zeroes": true, 00:08:29.566 "zcopy": true, 00:08:29.566 "get_zone_info": false, 00:08:29.566 "zone_management": false, 00:08:29.566 "zone_append": false, 00:08:29.566 "compare": false, 00:08:29.566 "compare_and_write": false, 00:08:29.566 "abort": true, 00:08:29.566 "seek_hole": false, 00:08:29.566 "seek_data": false, 00:08:29.566 "copy": true, 00:08:29.566 "nvme_iov_md": false 00:08:29.566 }, 00:08:29.566 "memory_domains": [ 00:08:29.566 { 00:08:29.566 "dma_device_id": "system", 00:08:29.566 "dma_device_type": 1 00:08:29.566 }, 00:08:29.566 { 00:08:29.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.566 "dma_device_type": 2 00:08:29.566 } 00:08:29.566 ], 
00:08:29.566 "driver_specific": {} 00:08:29.566 } 00:08:29.566 ] 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.566 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.825 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:29.825 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.825 "name": "Existed_Raid", 00:08:29.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.825 "strip_size_kb": 64, 00:08:29.825 "state": "configuring", 00:08:29.825 "raid_level": "concat", 00:08:29.825 "superblock": false, 00:08:29.825 "num_base_bdevs": 3, 00:08:29.825 "num_base_bdevs_discovered": 1, 00:08:29.825 "num_base_bdevs_operational": 3, 00:08:29.825 "base_bdevs_list": [ 00:08:29.825 { 00:08:29.825 "name": "BaseBdev1", 00:08:29.825 "uuid": "02c831ab-ad7a-4042-9439-fa91266336ec", 00:08:29.825 "is_configured": true, 00:08:29.825 "data_offset": 0, 00:08:29.825 "data_size": 65536 00:08:29.825 }, 00:08:29.825 { 00:08:29.825 "name": "BaseBdev2", 00:08:29.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.825 "is_configured": false, 00:08:29.825 "data_offset": 0, 00:08:29.825 "data_size": 0 00:08:29.825 }, 00:08:29.825 { 00:08:29.825 "name": "BaseBdev3", 00:08:29.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.825 "is_configured": false, 00:08:29.825 "data_offset": 0, 00:08:29.825 "data_size": 0 00:08:29.825 } 00:08:29.825 ] 00:08:29.825 }' 00:08:29.825 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.825 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.085 [2024-09-30 23:26:09.837805] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.085 [2024-09-30 23:26:09.837952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
Existed_Raid, state configuring 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.085 [2024-09-30 23:26:09.845824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.085 [2024-09-30 23:26:09.847771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.085 [2024-09-30 23:26:09.847816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.085 [2024-09-30 23:26:09.847825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:30.085 [2024-09-30 23:26:09.847836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.085 "name": "Existed_Raid", 00:08:30.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.085 "strip_size_kb": 64, 00:08:30.085 "state": "configuring", 00:08:30.085 "raid_level": "concat", 00:08:30.085 "superblock": false, 00:08:30.085 "num_base_bdevs": 3, 00:08:30.085 "num_base_bdevs_discovered": 1, 00:08:30.085 "num_base_bdevs_operational": 3, 00:08:30.085 "base_bdevs_list": [ 00:08:30.085 { 00:08:30.085 "name": "BaseBdev1", 00:08:30.085 "uuid": "02c831ab-ad7a-4042-9439-fa91266336ec", 00:08:30.085 "is_configured": true, 00:08:30.085 "data_offset": 0, 00:08:30.085 "data_size": 65536 00:08:30.085 }, 00:08:30.085 { 
00:08:30.085 "name": "BaseBdev2", 00:08:30.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.085 "is_configured": false, 00:08:30.085 "data_offset": 0, 00:08:30.085 "data_size": 0 00:08:30.085 }, 00:08:30.085 { 00:08:30.085 "name": "BaseBdev3", 00:08:30.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.085 "is_configured": false, 00:08:30.085 "data_offset": 0, 00:08:30.085 "data_size": 0 00:08:30.085 } 00:08:30.085 ] 00:08:30.085 }' 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.085 23:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.650 [2024-09-30 23:26:10.345538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.650 BaseBdev2 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.650 [ 00:08:30.650 { 00:08:30.650 "name": "BaseBdev2", 00:08:30.650 "aliases": [ 00:08:30.650 "cc2c9c7b-3e20-4561-a0f6-817ecd51d525" 00:08:30.650 ], 00:08:30.650 "product_name": "Malloc disk", 00:08:30.650 "block_size": 512, 00:08:30.650 "num_blocks": 65536, 00:08:30.650 "uuid": "cc2c9c7b-3e20-4561-a0f6-817ecd51d525", 00:08:30.650 "assigned_rate_limits": { 00:08:30.650 "rw_ios_per_sec": 0, 00:08:30.650 "rw_mbytes_per_sec": 0, 00:08:30.650 "r_mbytes_per_sec": 0, 00:08:30.650 "w_mbytes_per_sec": 0 00:08:30.650 }, 00:08:30.650 "claimed": true, 00:08:30.650 "claim_type": "exclusive_write", 00:08:30.650 "zoned": false, 00:08:30.650 "supported_io_types": { 00:08:30.650 "read": true, 00:08:30.650 "write": true, 00:08:30.650 "unmap": true, 00:08:30.650 "flush": true, 00:08:30.650 "reset": true, 00:08:30.650 "nvme_admin": false, 00:08:30.650 "nvme_io": false, 00:08:30.650 "nvme_io_md": false, 00:08:30.650 "write_zeroes": true, 00:08:30.650 "zcopy": true, 00:08:30.650 "get_zone_info": false, 00:08:30.650 "zone_management": false, 00:08:30.650 "zone_append": false, 00:08:30.650 "compare": false, 00:08:30.650 "compare_and_write": false, 00:08:30.650 "abort": true, 00:08:30.650 "seek_hole": false, 00:08:30.650 "seek_data": false, 00:08:30.650 
"copy": true, 00:08:30.650 "nvme_iov_md": false 00:08:30.650 }, 00:08:30.650 "memory_domains": [ 00:08:30.650 { 00:08:30.650 "dma_device_id": "system", 00:08:30.650 "dma_device_type": 1 00:08:30.650 }, 00:08:30.650 { 00:08:30.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.650 "dma_device_type": 2 00:08:30.650 } 00:08:30.650 ], 00:08:30.650 "driver_specific": {} 00:08:30.650 } 00:08:30.650 ] 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.650 
23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.650 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.650 "name": "Existed_Raid", 00:08:30.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.650 "strip_size_kb": 64, 00:08:30.650 "state": "configuring", 00:08:30.650 "raid_level": "concat", 00:08:30.650 "superblock": false, 00:08:30.650 "num_base_bdevs": 3, 00:08:30.650 "num_base_bdevs_discovered": 2, 00:08:30.650 "num_base_bdevs_operational": 3, 00:08:30.650 "base_bdevs_list": [ 00:08:30.650 { 00:08:30.650 "name": "BaseBdev1", 00:08:30.650 "uuid": "02c831ab-ad7a-4042-9439-fa91266336ec", 00:08:30.650 "is_configured": true, 00:08:30.651 "data_offset": 0, 00:08:30.651 "data_size": 65536 00:08:30.651 }, 00:08:30.651 { 00:08:30.651 "name": "BaseBdev2", 00:08:30.651 "uuid": "cc2c9c7b-3e20-4561-a0f6-817ecd51d525", 00:08:30.651 "is_configured": true, 00:08:30.651 "data_offset": 0, 00:08:30.651 "data_size": 65536 00:08:30.651 }, 00:08:30.651 { 00:08:30.651 "name": "BaseBdev3", 00:08:30.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.651 "is_configured": false, 00:08:30.651 "data_offset": 0, 00:08:30.651 "data_size": 0 00:08:30.651 } 00:08:30.651 ] 00:08:30.651 }' 00:08:30.651 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.651 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.220 23:26:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.220 [2024-09-30 23:26:10.831770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:31.220 [2024-09-30 23:26:10.831822] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:31.220 [2024-09-30 23:26:10.831834] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:31.220 [2024-09-30 23:26:10.832169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:31.220 [2024-09-30 23:26:10.832309] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:31.220 [2024-09-30 23:26:10.832325] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:31.220 [2024-09-30 23:26:10.832533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.220 BaseBdev3 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.220 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.221 [ 00:08:31.221 { 00:08:31.221 "name": "BaseBdev3", 00:08:31.221 "aliases": [ 00:08:31.221 "8367dcff-8110-46bc-aa49-845cd042a961" 00:08:31.221 ], 00:08:31.221 "product_name": "Malloc disk", 00:08:31.221 "block_size": 512, 00:08:31.221 "num_blocks": 65536, 00:08:31.221 "uuid": "8367dcff-8110-46bc-aa49-845cd042a961", 00:08:31.221 "assigned_rate_limits": { 00:08:31.221 "rw_ios_per_sec": 0, 00:08:31.221 "rw_mbytes_per_sec": 0, 00:08:31.221 "r_mbytes_per_sec": 0, 00:08:31.221 "w_mbytes_per_sec": 0 00:08:31.221 }, 00:08:31.221 "claimed": true, 00:08:31.221 "claim_type": "exclusive_write", 00:08:31.221 "zoned": false, 00:08:31.221 "supported_io_types": { 00:08:31.221 "read": true, 00:08:31.221 "write": true, 00:08:31.221 "unmap": true, 00:08:31.221 "flush": true, 00:08:31.221 "reset": true, 00:08:31.221 "nvme_admin": false, 00:08:31.221 "nvme_io": false, 00:08:31.221 "nvme_io_md": false, 00:08:31.221 "write_zeroes": true, 00:08:31.221 "zcopy": true, 00:08:31.221 "get_zone_info": false, 00:08:31.221 "zone_management": false, 00:08:31.221 "zone_append": false, 00:08:31.221 "compare": false, 00:08:31.221 "compare_and_write": false, 
00:08:31.221 "abort": true, 00:08:31.221 "seek_hole": false, 00:08:31.221 "seek_data": false, 00:08:31.221 "copy": true, 00:08:31.221 "nvme_iov_md": false 00:08:31.221 }, 00:08:31.221 "memory_domains": [ 00:08:31.221 { 00:08:31.221 "dma_device_id": "system", 00:08:31.221 "dma_device_type": 1 00:08:31.221 }, 00:08:31.221 { 00:08:31.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.221 "dma_device_type": 2 00:08:31.221 } 00:08:31.221 ], 00:08:31.221 "driver_specific": {} 00:08:31.221 } 00:08:31.221 ] 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.221 
23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.221 "name": "Existed_Raid", 00:08:31.221 "uuid": "9aad2d8e-11b8-4b36-be41-3b62f121d591", 00:08:31.221 "strip_size_kb": 64, 00:08:31.221 "state": "online", 00:08:31.221 "raid_level": "concat", 00:08:31.221 "superblock": false, 00:08:31.221 "num_base_bdevs": 3, 00:08:31.221 "num_base_bdevs_discovered": 3, 00:08:31.221 "num_base_bdevs_operational": 3, 00:08:31.221 "base_bdevs_list": [ 00:08:31.221 { 00:08:31.221 "name": "BaseBdev1", 00:08:31.221 "uuid": "02c831ab-ad7a-4042-9439-fa91266336ec", 00:08:31.221 "is_configured": true, 00:08:31.221 "data_offset": 0, 00:08:31.221 "data_size": 65536 00:08:31.221 }, 00:08:31.221 { 00:08:31.221 "name": "BaseBdev2", 00:08:31.221 "uuid": "cc2c9c7b-3e20-4561-a0f6-817ecd51d525", 00:08:31.221 "is_configured": true, 00:08:31.221 "data_offset": 0, 00:08:31.221 "data_size": 65536 00:08:31.221 }, 00:08:31.221 { 00:08:31.221 "name": "BaseBdev3", 00:08:31.221 "uuid": "8367dcff-8110-46bc-aa49-845cd042a961", 00:08:31.221 "is_configured": true, 00:08:31.221 "data_offset": 0, 00:08:31.221 "data_size": 65536 00:08:31.221 } 00:08:31.221 ] 00:08:31.221 }' 00:08:31.221 23:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.221 23:26:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.481 [2024-09-30 23:26:11.255371] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.481 "name": "Existed_Raid", 00:08:31.481 "aliases": [ 00:08:31.481 "9aad2d8e-11b8-4b36-be41-3b62f121d591" 00:08:31.481 ], 00:08:31.481 "product_name": "Raid Volume", 00:08:31.481 "block_size": 512, 00:08:31.481 "num_blocks": 196608, 00:08:31.481 "uuid": "9aad2d8e-11b8-4b36-be41-3b62f121d591", 00:08:31.481 "assigned_rate_limits": { 00:08:31.481 "rw_ios_per_sec": 0, 00:08:31.481 "rw_mbytes_per_sec": 0, 00:08:31.481 "r_mbytes_per_sec": 0, 00:08:31.481 
"w_mbytes_per_sec": 0 00:08:31.481 }, 00:08:31.481 "claimed": false, 00:08:31.481 "zoned": false, 00:08:31.481 "supported_io_types": { 00:08:31.481 "read": true, 00:08:31.481 "write": true, 00:08:31.481 "unmap": true, 00:08:31.481 "flush": true, 00:08:31.481 "reset": true, 00:08:31.481 "nvme_admin": false, 00:08:31.481 "nvme_io": false, 00:08:31.481 "nvme_io_md": false, 00:08:31.481 "write_zeroes": true, 00:08:31.481 "zcopy": false, 00:08:31.481 "get_zone_info": false, 00:08:31.481 "zone_management": false, 00:08:31.481 "zone_append": false, 00:08:31.481 "compare": false, 00:08:31.481 "compare_and_write": false, 00:08:31.481 "abort": false, 00:08:31.481 "seek_hole": false, 00:08:31.481 "seek_data": false, 00:08:31.481 "copy": false, 00:08:31.481 "nvme_iov_md": false 00:08:31.481 }, 00:08:31.481 "memory_domains": [ 00:08:31.481 { 00:08:31.481 "dma_device_id": "system", 00:08:31.481 "dma_device_type": 1 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.481 "dma_device_type": 2 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "dma_device_id": "system", 00:08:31.481 "dma_device_type": 1 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.481 "dma_device_type": 2 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "dma_device_id": "system", 00:08:31.481 "dma_device_type": 1 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.481 "dma_device_type": 2 00:08:31.481 } 00:08:31.481 ], 00:08:31.481 "driver_specific": { 00:08:31.481 "raid": { 00:08:31.481 "uuid": "9aad2d8e-11b8-4b36-be41-3b62f121d591", 00:08:31.481 "strip_size_kb": 64, 00:08:31.481 "state": "online", 00:08:31.481 "raid_level": "concat", 00:08:31.481 "superblock": false, 00:08:31.481 "num_base_bdevs": 3, 00:08:31.481 "num_base_bdevs_discovered": 3, 00:08:31.481 "num_base_bdevs_operational": 3, 00:08:31.481 "base_bdevs_list": [ 00:08:31.481 { 00:08:31.481 "name": "BaseBdev1", 00:08:31.481 "uuid": 
"02c831ab-ad7a-4042-9439-fa91266336ec", 00:08:31.481 "is_configured": true, 00:08:31.481 "data_offset": 0, 00:08:31.481 "data_size": 65536 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "name": "BaseBdev2", 00:08:31.481 "uuid": "cc2c9c7b-3e20-4561-a0f6-817ecd51d525", 00:08:31.481 "is_configured": true, 00:08:31.481 "data_offset": 0, 00:08:31.481 "data_size": 65536 00:08:31.481 }, 00:08:31.481 { 00:08:31.481 "name": "BaseBdev3", 00:08:31.481 "uuid": "8367dcff-8110-46bc-aa49-845cd042a961", 00:08:31.481 "is_configured": true, 00:08:31.481 "data_offset": 0, 00:08:31.481 "data_size": 65536 00:08:31.481 } 00:08:31.481 ] 00:08:31.481 } 00:08:31.481 } 00:08:31.481 }' 00:08:31.481 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:31.782 BaseBdev2 00:08:31.782 BaseBdev3' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.782 
23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.782 
23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.782 [2024-09-30 23:26:11.554636] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.782 [2024-09-30 23:26:11.554714] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.782 [2024-09-30 23:26:11.554784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.782 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.066 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.066 "name": "Existed_Raid", 00:08:32.066 "uuid": "9aad2d8e-11b8-4b36-be41-3b62f121d591", 00:08:32.066 "strip_size_kb": 64, 00:08:32.066 "state": "offline", 00:08:32.066 "raid_level": "concat", 00:08:32.066 "superblock": false, 00:08:32.066 "num_base_bdevs": 3, 00:08:32.066 "num_base_bdevs_discovered": 2, 00:08:32.066 "num_base_bdevs_operational": 2, 00:08:32.066 "base_bdevs_list": [ 00:08:32.066 { 00:08:32.066 "name": null, 00:08:32.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.066 "is_configured": false, 00:08:32.066 "data_offset": 0, 00:08:32.066 "data_size": 65536 00:08:32.066 }, 00:08:32.066 { 00:08:32.066 "name": "BaseBdev2", 00:08:32.066 "uuid": "cc2c9c7b-3e20-4561-a0f6-817ecd51d525", 00:08:32.066 
"is_configured": true, 00:08:32.066 "data_offset": 0, 00:08:32.066 "data_size": 65536 00:08:32.066 }, 00:08:32.066 { 00:08:32.066 "name": "BaseBdev3", 00:08:32.066 "uuid": "8367dcff-8110-46bc-aa49-845cd042a961", 00:08:32.066 "is_configured": true, 00:08:32.066 "data_offset": 0, 00:08:32.066 "data_size": 65536 00:08:32.066 } 00:08:32.066 ] 00:08:32.066 }' 00:08:32.066 23:26:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.066 23:26:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.326 [2024-09-30 23:26:12.073270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.326 [2024-09-30 23:26:12.140314] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:32.326 [2024-09-30 23:26:12.140415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < 
num_base_bdevs )) 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.326 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.586 BaseBdev2 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 
-- # local i 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.586 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.587 [ 00:08:32.587 { 00:08:32.587 "name": "BaseBdev2", 00:08:32.587 "aliases": [ 00:08:32.587 "c8115b6f-0146-48f7-bce8-3a3ce4744b94" 00:08:32.587 ], 00:08:32.587 "product_name": "Malloc disk", 00:08:32.587 "block_size": 512, 00:08:32.587 "num_blocks": 65536, 00:08:32.587 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:32.587 "assigned_rate_limits": { 00:08:32.587 "rw_ios_per_sec": 0, 00:08:32.587 "rw_mbytes_per_sec": 0, 00:08:32.587 "r_mbytes_per_sec": 0, 00:08:32.587 "w_mbytes_per_sec": 0 00:08:32.587 }, 00:08:32.587 "claimed": false, 00:08:32.587 "zoned": false, 00:08:32.587 "supported_io_types": { 00:08:32.587 "read": true, 00:08:32.587 "write": true, 00:08:32.587 "unmap": true, 00:08:32.587 "flush": true, 00:08:32.587 "reset": true, 00:08:32.587 "nvme_admin": false, 00:08:32.587 "nvme_io": false, 00:08:32.587 "nvme_io_md": false, 00:08:32.587 "write_zeroes": true, 00:08:32.587 "zcopy": true, 00:08:32.587 "get_zone_info": false, 
00:08:32.587 "zone_management": false, 00:08:32.587 "zone_append": false, 00:08:32.587 "compare": false, 00:08:32.587 "compare_and_write": false, 00:08:32.587 "abort": true, 00:08:32.587 "seek_hole": false, 00:08:32.587 "seek_data": false, 00:08:32.587 "copy": true, 00:08:32.587 "nvme_iov_md": false 00:08:32.587 }, 00:08:32.587 "memory_domains": [ 00:08:32.587 { 00:08:32.587 "dma_device_id": "system", 00:08:32.587 "dma_device_type": 1 00:08:32.587 }, 00:08:32.587 { 00:08:32.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.587 "dma_device_type": 2 00:08:32.587 } 00:08:32.587 ], 00:08:32.587 "driver_specific": {} 00:08:32.587 } 00:08:32.587 ] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.587 BaseBdev3 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 
00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.587 [ 00:08:32.587 { 00:08:32.587 "name": "BaseBdev3", 00:08:32.587 "aliases": [ 00:08:32.587 "31c80bad-7b64-4ef0-8b11-511c04148ee5" 00:08:32.587 ], 00:08:32.587 "product_name": "Malloc disk", 00:08:32.587 "block_size": 512, 00:08:32.587 "num_blocks": 65536, 00:08:32.587 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:32.587 "assigned_rate_limits": { 00:08:32.587 "rw_ios_per_sec": 0, 00:08:32.587 "rw_mbytes_per_sec": 0, 00:08:32.587 "r_mbytes_per_sec": 0, 00:08:32.587 "w_mbytes_per_sec": 0 00:08:32.587 }, 00:08:32.587 "claimed": false, 00:08:32.587 "zoned": false, 00:08:32.587 "supported_io_types": { 00:08:32.587 "read": true, 00:08:32.587 "write": true, 00:08:32.587 "unmap": true, 00:08:32.587 "flush": true, 00:08:32.587 "reset": true, 00:08:32.587 "nvme_admin": false, 00:08:32.587 "nvme_io": false, 00:08:32.587 "nvme_io_md": false, 00:08:32.587 "write_zeroes": true, 00:08:32.587 "zcopy": true, 00:08:32.587 "get_zone_info": false, 00:08:32.587 
"zone_management": false, 00:08:32.587 "zone_append": false, 00:08:32.587 "compare": false, 00:08:32.587 "compare_and_write": false, 00:08:32.587 "abort": true, 00:08:32.587 "seek_hole": false, 00:08:32.587 "seek_data": false, 00:08:32.587 "copy": true, 00:08:32.587 "nvme_iov_md": false 00:08:32.587 }, 00:08:32.587 "memory_domains": [ 00:08:32.587 { 00:08:32.587 "dma_device_id": "system", 00:08:32.587 "dma_device_type": 1 00:08:32.587 }, 00:08:32.587 { 00:08:32.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.587 "dma_device_type": 2 00:08:32.587 } 00:08:32.587 ], 00:08:32.587 "driver_specific": {} 00:08:32.587 } 00:08:32.587 ] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.587 [2024-09-30 23:26:12.315202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.587 [2024-09-30 23:26:12.315319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.587 [2024-09-30 23:26:12.315362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.587 [2024-09-30 23:26:12.317208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:32.587 23:26:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.587 "name": "Existed_Raid", 00:08:32.587 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:32.587 "strip_size_kb": 64, 00:08:32.587 "state": "configuring", 00:08:32.587 "raid_level": "concat", 00:08:32.587 "superblock": false, 00:08:32.587 "num_base_bdevs": 3, 00:08:32.587 "num_base_bdevs_discovered": 2, 00:08:32.587 "num_base_bdevs_operational": 3, 00:08:32.587 "base_bdevs_list": [ 00:08:32.587 { 00:08:32.587 "name": "BaseBdev1", 00:08:32.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.587 "is_configured": false, 00:08:32.587 "data_offset": 0, 00:08:32.587 "data_size": 0 00:08:32.587 }, 00:08:32.587 { 00:08:32.587 "name": "BaseBdev2", 00:08:32.587 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:32.587 "is_configured": true, 00:08:32.587 "data_offset": 0, 00:08:32.587 "data_size": 65536 00:08:32.587 }, 00:08:32.587 { 00:08:32.587 "name": "BaseBdev3", 00:08:32.587 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:32.587 "is_configured": true, 00:08:32.587 "data_offset": 0, 00:08:32.587 "data_size": 65536 00:08:32.587 } 00:08:32.587 ] 00:08:32.587 }' 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.587 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.157 [2024-09-30 23:26:12.742911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.157 23:26:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.157 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.157 "name": "Existed_Raid", 00:08:33.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.157 "strip_size_kb": 64, 00:08:33.157 "state": "configuring", 00:08:33.157 "raid_level": "concat", 00:08:33.157 "superblock": false, 00:08:33.157 "num_base_bdevs": 3, 00:08:33.157 "num_base_bdevs_discovered": 1, 00:08:33.157 
"num_base_bdevs_operational": 3, 00:08:33.157 "base_bdevs_list": [ 00:08:33.157 { 00:08:33.157 "name": "BaseBdev1", 00:08:33.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.157 "is_configured": false, 00:08:33.157 "data_offset": 0, 00:08:33.157 "data_size": 0 00:08:33.157 }, 00:08:33.157 { 00:08:33.157 "name": null, 00:08:33.157 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:33.157 "is_configured": false, 00:08:33.157 "data_offset": 0, 00:08:33.157 "data_size": 65536 00:08:33.157 }, 00:08:33.157 { 00:08:33.157 "name": "BaseBdev3", 00:08:33.157 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:33.157 "is_configured": true, 00:08:33.157 "data_offset": 0, 00:08:33.157 "data_size": 65536 00:08:33.157 } 00:08:33.157 ] 00:08:33.158 }' 00:08:33.158 23:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.158 23:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:33.417 [2024-09-30 23:26:13.205157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.417 BaseBdev1 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.417 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.417 [ 00:08:33.417 { 00:08:33.417 "name": "BaseBdev1", 00:08:33.417 "aliases": [ 00:08:33.417 "72a4a90b-2eb3-470e-ab5c-a152fb6f7141" 00:08:33.417 ], 00:08:33.417 "product_name": "Malloc disk", 00:08:33.417 "block_size": 512, 00:08:33.417 "num_blocks": 65536, 00:08:33.417 
"uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:33.417 "assigned_rate_limits": { 00:08:33.417 "rw_ios_per_sec": 0, 00:08:33.418 "rw_mbytes_per_sec": 0, 00:08:33.418 "r_mbytes_per_sec": 0, 00:08:33.418 "w_mbytes_per_sec": 0 00:08:33.418 }, 00:08:33.418 "claimed": true, 00:08:33.418 "claim_type": "exclusive_write", 00:08:33.418 "zoned": false, 00:08:33.418 "supported_io_types": { 00:08:33.418 "read": true, 00:08:33.418 "write": true, 00:08:33.418 "unmap": true, 00:08:33.418 "flush": true, 00:08:33.418 "reset": true, 00:08:33.418 "nvme_admin": false, 00:08:33.418 "nvme_io": false, 00:08:33.418 "nvme_io_md": false, 00:08:33.418 "write_zeroes": true, 00:08:33.418 "zcopy": true, 00:08:33.418 "get_zone_info": false, 00:08:33.418 "zone_management": false, 00:08:33.418 "zone_append": false, 00:08:33.418 "compare": false, 00:08:33.418 "compare_and_write": false, 00:08:33.418 "abort": true, 00:08:33.418 "seek_hole": false, 00:08:33.418 "seek_data": false, 00:08:33.418 "copy": true, 00:08:33.418 "nvme_iov_md": false 00:08:33.418 }, 00:08:33.418 "memory_domains": [ 00:08:33.418 { 00:08:33.418 "dma_device_id": "system", 00:08:33.418 "dma_device_type": 1 00:08:33.418 }, 00:08:33.418 { 00:08:33.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.418 "dma_device_type": 2 00:08:33.418 } 00:08:33.418 ], 00:08:33.418 "driver_specific": {} 00:08:33.418 } 00:08:33.418 ] 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.418 
23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.418 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.678 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.678 "name": "Existed_Raid", 00:08:33.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.678 "strip_size_kb": 64, 00:08:33.678 "state": "configuring", 00:08:33.678 "raid_level": "concat", 00:08:33.678 "superblock": false, 00:08:33.678 "num_base_bdevs": 3, 00:08:33.678 "num_base_bdevs_discovered": 2, 00:08:33.678 "num_base_bdevs_operational": 3, 00:08:33.678 "base_bdevs_list": [ 00:08:33.678 { 00:08:33.678 "name": "BaseBdev1", 00:08:33.678 "uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:33.678 "is_configured": true, 00:08:33.678 
"data_offset": 0, 00:08:33.678 "data_size": 65536 00:08:33.678 }, 00:08:33.678 { 00:08:33.678 "name": null, 00:08:33.678 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:33.678 "is_configured": false, 00:08:33.678 "data_offset": 0, 00:08:33.678 "data_size": 65536 00:08:33.678 }, 00:08:33.678 { 00:08:33.678 "name": "BaseBdev3", 00:08:33.678 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:33.678 "is_configured": true, 00:08:33.678 "data_offset": 0, 00:08:33.678 "data_size": 65536 00:08:33.678 } 00:08:33.678 ] 00:08:33.678 }' 00:08:33.678 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.678 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 [2024-09-30 23:26:13.676424] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.939 
23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.939 "name": "Existed_Raid", 00:08:33.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.939 "strip_size_kb": 64, 00:08:33.939 "state": "configuring", 
00:08:33.939 "raid_level": "concat", 00:08:33.939 "superblock": false, 00:08:33.939 "num_base_bdevs": 3, 00:08:33.939 "num_base_bdevs_discovered": 1, 00:08:33.939 "num_base_bdevs_operational": 3, 00:08:33.939 "base_bdevs_list": [ 00:08:33.939 { 00:08:33.939 "name": "BaseBdev1", 00:08:33.939 "uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:33.939 "is_configured": true, 00:08:33.939 "data_offset": 0, 00:08:33.939 "data_size": 65536 00:08:33.939 }, 00:08:33.939 { 00:08:33.939 "name": null, 00:08:33.939 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:33.939 "is_configured": false, 00:08:33.939 "data_offset": 0, 00:08:33.939 "data_size": 65536 00:08:33.939 }, 00:08:33.939 { 00:08:33.939 "name": null, 00:08:33.939 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:33.939 "is_configured": false, 00:08:33.939 "data_offset": 0, 00:08:33.939 "data_size": 65536 00:08:33.939 } 00:08:33.939 ] 00:08:33.939 }' 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.939 23:26:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:34.510 23:26:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.510 [2024-09-30 23:26:14.135702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.510 23:26:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.510 "name": "Existed_Raid", 00:08:34.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.510 "strip_size_kb": 64, 00:08:34.510 "state": "configuring", 00:08:34.510 "raid_level": "concat", 00:08:34.510 "superblock": false, 00:08:34.510 "num_base_bdevs": 3, 00:08:34.510 "num_base_bdevs_discovered": 2, 00:08:34.510 "num_base_bdevs_operational": 3, 00:08:34.510 "base_bdevs_list": [ 00:08:34.510 { 00:08:34.510 "name": "BaseBdev1", 00:08:34.510 "uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:34.510 "is_configured": true, 00:08:34.510 "data_offset": 0, 00:08:34.510 "data_size": 65536 00:08:34.510 }, 00:08:34.510 { 00:08:34.510 "name": null, 00:08:34.510 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:34.510 "is_configured": false, 00:08:34.510 "data_offset": 0, 00:08:34.510 "data_size": 65536 00:08:34.510 }, 00:08:34.510 { 00:08:34.510 "name": "BaseBdev3", 00:08:34.510 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:34.510 "is_configured": true, 00:08:34.510 "data_offset": 0, 00:08:34.510 "data_size": 65536 00:08:34.510 } 00:08:34.510 ] 00:08:34.510 }' 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.510 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.770 [2024-09-30 23:26:14.606921] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.770 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:35.030 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.030 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.030 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.030 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.030 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.030 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.030 "name": "Existed_Raid", 00:08:35.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.030 "strip_size_kb": 64, 00:08:35.030 "state": "configuring", 00:08:35.030 "raid_level": "concat", 00:08:35.030 "superblock": false, 00:08:35.030 "num_base_bdevs": 3, 00:08:35.030 "num_base_bdevs_discovered": 1, 00:08:35.030 "num_base_bdevs_operational": 3, 00:08:35.030 "base_bdevs_list": [ 00:08:35.030 { 00:08:35.030 "name": null, 00:08:35.030 "uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:35.030 "is_configured": false, 00:08:35.030 "data_offset": 0, 00:08:35.030 "data_size": 65536 00:08:35.030 }, 00:08:35.030 { 00:08:35.030 "name": null, 00:08:35.030 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:35.030 "is_configured": false, 00:08:35.030 "data_offset": 0, 00:08:35.030 "data_size": 65536 00:08:35.030 }, 00:08:35.030 { 00:08:35.030 "name": "BaseBdev3", 00:08:35.030 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:35.030 "is_configured": true, 00:08:35.030 "data_offset": 0, 00:08:35.030 "data_size": 65536 00:08:35.030 } 00:08:35.030 ] 00:08:35.030 }' 00:08:35.030 23:26:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.030 23:26:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.290 23:26:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.290 [2024-09-30 23:26:15.100606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.290 23:26:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.290 "name": "Existed_Raid", 00:08:35.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.290 "strip_size_kb": 64, 00:08:35.290 "state": "configuring", 00:08:35.290 "raid_level": "concat", 00:08:35.290 "superblock": false, 00:08:35.290 "num_base_bdevs": 3, 00:08:35.290 "num_base_bdevs_discovered": 2, 00:08:35.290 "num_base_bdevs_operational": 3, 00:08:35.290 "base_bdevs_list": [ 00:08:35.290 { 00:08:35.290 "name": null, 00:08:35.290 "uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:35.290 "is_configured": false, 00:08:35.290 "data_offset": 0, 00:08:35.290 "data_size": 65536 00:08:35.290 }, 00:08:35.290 { 00:08:35.290 "name": "BaseBdev2", 00:08:35.290 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:35.290 "is_configured": true, 00:08:35.290 "data_offset": 0, 00:08:35.290 "data_size": 65536 00:08:35.290 }, 00:08:35.290 { 00:08:35.290 "name": "BaseBdev3", 00:08:35.290 "uuid": 
"31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:35.290 "is_configured": true, 00:08:35.290 "data_offset": 0, 00:08:35.290 "data_size": 65536 00:08:35.290 } 00:08:35.290 ] 00:08:35.290 }' 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.290 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 72a4a90b-2eb3-470e-ab5c-a152fb6f7141 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.861 23:26:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.861 [2024-09-30 23:26:15.654602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:35.861 [2024-09-30 23:26:15.654712] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:35.861 [2024-09-30 23:26:15.654741] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:35.861 [2024-09-30 23:26:15.655043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:35.862 [2024-09-30 23:26:15.655204] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:35.862 [2024-09-30 23:26:15.655245] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:35.862 [2024-09-30 23:26:15.655460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.862 NewBaseBdev 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.862 
23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.862 [ 00:08:35.862 { 00:08:35.862 "name": "NewBaseBdev", 00:08:35.862 "aliases": [ 00:08:35.862 "72a4a90b-2eb3-470e-ab5c-a152fb6f7141" 00:08:35.862 ], 00:08:35.862 "product_name": "Malloc disk", 00:08:35.862 "block_size": 512, 00:08:35.862 "num_blocks": 65536, 00:08:35.862 "uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:35.862 "assigned_rate_limits": { 00:08:35.862 "rw_ios_per_sec": 0, 00:08:35.862 "rw_mbytes_per_sec": 0, 00:08:35.862 "r_mbytes_per_sec": 0, 00:08:35.862 "w_mbytes_per_sec": 0 00:08:35.862 }, 00:08:35.862 "claimed": true, 00:08:35.862 "claim_type": "exclusive_write", 00:08:35.862 "zoned": false, 00:08:35.862 "supported_io_types": { 00:08:35.862 "read": true, 00:08:35.862 "write": true, 00:08:35.862 "unmap": true, 00:08:35.862 "flush": true, 00:08:35.862 "reset": true, 00:08:35.862 "nvme_admin": false, 00:08:35.862 "nvme_io": false, 00:08:35.862 "nvme_io_md": false, 00:08:35.862 "write_zeroes": true, 00:08:35.862 "zcopy": true, 00:08:35.862 "get_zone_info": false, 00:08:35.862 "zone_management": false, 00:08:35.862 "zone_append": false, 00:08:35.862 "compare": false, 00:08:35.862 "compare_and_write": false, 00:08:35.862 "abort": true, 00:08:35.862 "seek_hole": false, 00:08:35.862 "seek_data": false, 00:08:35.862 "copy": true, 00:08:35.862 "nvme_iov_md": false 00:08:35.862 }, 00:08:35.862 "memory_domains": [ 00:08:35.862 { 00:08:35.862 "dma_device_id": "system", 00:08:35.862 "dma_device_type": 1 
00:08:35.862 }, 00:08:35.862 { 00:08:35.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.862 "dma_device_type": 2 00:08:35.862 } 00:08:35.862 ], 00:08:35.862 "driver_specific": {} 00:08:35.862 } 00:08:35.862 ] 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.862 23:26:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.121 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.121 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.121 "name": "Existed_Raid", 00:08:36.121 "uuid": "e3a5a313-0d16-4480-9b24-fe03aa07dcfd", 00:08:36.121 "strip_size_kb": 64, 00:08:36.121 "state": "online", 00:08:36.121 "raid_level": "concat", 00:08:36.121 "superblock": false, 00:08:36.121 "num_base_bdevs": 3, 00:08:36.121 "num_base_bdevs_discovered": 3, 00:08:36.121 "num_base_bdevs_operational": 3, 00:08:36.121 "base_bdevs_list": [ 00:08:36.121 { 00:08:36.121 "name": "NewBaseBdev", 00:08:36.121 "uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:36.121 "is_configured": true, 00:08:36.121 "data_offset": 0, 00:08:36.121 "data_size": 65536 00:08:36.121 }, 00:08:36.121 { 00:08:36.121 "name": "BaseBdev2", 00:08:36.121 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:36.121 "is_configured": true, 00:08:36.121 "data_offset": 0, 00:08:36.121 "data_size": 65536 00:08:36.121 }, 00:08:36.121 { 00:08:36.121 "name": "BaseBdev3", 00:08:36.121 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:36.121 "is_configured": true, 00:08:36.121 "data_offset": 0, 00:08:36.121 "data_size": 65536 00:08:36.121 } 00:08:36.121 ] 00:08:36.121 }' 00:08:36.121 23:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.121 23:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- 
# local base_bdev_names 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.381 [2024-09-30 23:26:16.162094] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.381 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.381 "name": "Existed_Raid", 00:08:36.381 "aliases": [ 00:08:36.381 "e3a5a313-0d16-4480-9b24-fe03aa07dcfd" 00:08:36.381 ], 00:08:36.381 "product_name": "Raid Volume", 00:08:36.381 "block_size": 512, 00:08:36.381 "num_blocks": 196608, 00:08:36.381 "uuid": "e3a5a313-0d16-4480-9b24-fe03aa07dcfd", 00:08:36.381 "assigned_rate_limits": { 00:08:36.381 "rw_ios_per_sec": 0, 00:08:36.381 "rw_mbytes_per_sec": 0, 00:08:36.381 "r_mbytes_per_sec": 0, 00:08:36.381 "w_mbytes_per_sec": 0 00:08:36.381 }, 00:08:36.381 "claimed": false, 00:08:36.381 "zoned": false, 00:08:36.381 "supported_io_types": { 00:08:36.381 "read": true, 00:08:36.381 "write": true, 00:08:36.381 "unmap": true, 00:08:36.381 "flush": true, 00:08:36.381 "reset": true, 00:08:36.381 "nvme_admin": false, 00:08:36.381 "nvme_io": false, 00:08:36.381 "nvme_io_md": false, 00:08:36.381 "write_zeroes": true, 00:08:36.381 "zcopy": false, 00:08:36.381 "get_zone_info": false, 00:08:36.381 "zone_management": false, 00:08:36.381 
"zone_append": false, 00:08:36.381 "compare": false, 00:08:36.381 "compare_and_write": false, 00:08:36.381 "abort": false, 00:08:36.381 "seek_hole": false, 00:08:36.381 "seek_data": false, 00:08:36.381 "copy": false, 00:08:36.381 "nvme_iov_md": false 00:08:36.381 }, 00:08:36.381 "memory_domains": [ 00:08:36.381 { 00:08:36.381 "dma_device_id": "system", 00:08:36.381 "dma_device_type": 1 00:08:36.381 }, 00:08:36.381 { 00:08:36.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.381 "dma_device_type": 2 00:08:36.381 }, 00:08:36.381 { 00:08:36.381 "dma_device_id": "system", 00:08:36.381 "dma_device_type": 1 00:08:36.381 }, 00:08:36.381 { 00:08:36.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.381 "dma_device_type": 2 00:08:36.381 }, 00:08:36.381 { 00:08:36.381 "dma_device_id": "system", 00:08:36.381 "dma_device_type": 1 00:08:36.381 }, 00:08:36.381 { 00:08:36.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.381 "dma_device_type": 2 00:08:36.381 } 00:08:36.381 ], 00:08:36.381 "driver_specific": { 00:08:36.381 "raid": { 00:08:36.381 "uuid": "e3a5a313-0d16-4480-9b24-fe03aa07dcfd", 00:08:36.381 "strip_size_kb": 64, 00:08:36.381 "state": "online", 00:08:36.381 "raid_level": "concat", 00:08:36.381 "superblock": false, 00:08:36.381 "num_base_bdevs": 3, 00:08:36.381 "num_base_bdevs_discovered": 3, 00:08:36.381 "num_base_bdevs_operational": 3, 00:08:36.381 "base_bdevs_list": [ 00:08:36.381 { 00:08:36.381 "name": "NewBaseBdev", 00:08:36.381 "uuid": "72a4a90b-2eb3-470e-ab5c-a152fb6f7141", 00:08:36.381 "is_configured": true, 00:08:36.381 "data_offset": 0, 00:08:36.381 "data_size": 65536 00:08:36.381 }, 00:08:36.381 { 00:08:36.381 "name": "BaseBdev2", 00:08:36.381 "uuid": "c8115b6f-0146-48f7-bce8-3a3ce4744b94", 00:08:36.381 "is_configured": true, 00:08:36.381 "data_offset": 0, 00:08:36.381 "data_size": 65536 00:08:36.381 }, 00:08:36.382 { 00:08:36.382 "name": "BaseBdev3", 00:08:36.382 "uuid": "31c80bad-7b64-4ef0-8b11-511c04148ee5", 00:08:36.382 "is_configured": 
true, 00:08:36.382 "data_offset": 0, 00:08:36.382 "data_size": 65536 00:08:36.382 } 00:08:36.382 ] 00:08:36.382 } 00:08:36.382 } 00:08:36.382 }' 00:08:36.382 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:36.641 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:36.641 BaseBdev2 00:08:36.641 BaseBdev3' 00:08:36.641 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.641 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.641 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.641 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:36.641 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.641 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.642 [2024-09-30 23:26:16.401374] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:08:36.642 [2024-09-30 23:26:16.401403] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.642 [2024-09-30 23:26:16.401478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.642 [2024-09-30 23:26:16.401532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.642 [2024-09-30 23:26:16.401543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76790 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76790 ']' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76790 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76790 00:08:36.642 killing process with pid 76790 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76790' 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 76790 00:08:36.642 [2024-09-30 23:26:16.441254] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:08:36.642 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76790 00:08:36.642 [2024-09-30 23:26:16.471545] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.901 23:26:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:36.901 00:08:36.901 real 0m8.825s 00:08:36.901 user 0m15.026s 00:08:36.901 sys 0m1.799s 00:08:36.901 ************************************ 00:08:36.901 END TEST raid_state_function_test 00:08:36.901 ************************************ 00:08:36.901 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.901 23:26:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.160 23:26:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:37.160 23:26:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:37.160 23:26:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.160 23:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.160 ************************************ 00:08:37.160 START TEST raid_state_function_test_sb 00:08:37.160 ************************************ 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77395 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:37.160 Process raid pid: 77395 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77395' 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77395 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77395 ']' 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.160 23:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.160 [2024-09-30 23:26:16.882610] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:08:37.160 [2024-09-30 23:26:16.882838] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.419 [2024-09-30 23:26:17.045287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.419 [2024-09-30 23:26:17.090699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.419 [2024-09-30 23:26:17.132945] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.419 [2024-09-30 23:26:17.132981] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.987 [2024-09-30 23:26:17.706653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.987 [2024-09-30 23:26:17.706716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.987 [2024-09-30 23:26:17.706730] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.987 [2024-09-30 23:26:17.706740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.987 [2024-09-30 23:26:17.706746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:08:37.987 [2024-09-30 23:26:17.706759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.987 "name": "Existed_Raid", 00:08:37.987 "uuid": "30305f38-7933-4a59-aefd-33786385c363", 00:08:37.987 "strip_size_kb": 64, 00:08:37.987 "state": "configuring", 00:08:37.987 "raid_level": "concat", 00:08:37.987 "superblock": true, 00:08:37.987 "num_base_bdevs": 3, 00:08:37.987 "num_base_bdevs_discovered": 0, 00:08:37.987 "num_base_bdevs_operational": 3, 00:08:37.987 "base_bdevs_list": [ 00:08:37.987 { 00:08:37.987 "name": "BaseBdev1", 00:08:37.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.987 "is_configured": false, 00:08:37.987 "data_offset": 0, 00:08:37.987 "data_size": 0 00:08:37.987 }, 00:08:37.987 { 00:08:37.987 "name": "BaseBdev2", 00:08:37.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.987 "is_configured": false, 00:08:37.987 "data_offset": 0, 00:08:37.987 "data_size": 0 00:08:37.987 }, 00:08:37.987 { 00:08:37.987 "name": "BaseBdev3", 00:08:37.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.987 "is_configured": false, 00:08:37.987 "data_offset": 0, 00:08:37.987 "data_size": 0 00:08:37.987 } 00:08:37.987 ] 00:08:37.987 }' 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.987 23:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.556 [2024-09-30 23:26:18.129909] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.556 [2024-09-30 23:26:18.129955] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.556 [2024-09-30 23:26:18.141926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.556 [2024-09-30 23:26:18.141963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.556 [2024-09-30 23:26:18.141971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.556 [2024-09-30 23:26:18.141980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.556 [2024-09-30 23:26:18.141986] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.556 [2024-09-30 23:26:18.141995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.556 [2024-09-30 23:26:18.162780] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.556 BaseBdev1 
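The `verify_raid_bdev_state` checks traced above boil down to selecting one entry from `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and comparing fields. A minimal Python sketch of that check (the helper name `select_raid` is ours, not part of the test suite; the field values are copied from the dump above):

```python
import json

# Output shaped like `bdev_raid_get_bdevs all`, values taken from the
# state dump in the log above.
raid_bdevs = json.loads("""[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3
  }
]""")

def select_raid(bdevs, name):
    # Python equivalent of jq '.[] | select(.name == NAME)'
    return next(b for b in bdevs if b["name"] == name)

info = select_raid(raid_bdevs, "Existed_Raid")
assert info["state"] == "configuring"
assert info["num_base_bdevs_discovered"] == 1

# With a superblock (-s), each 65536-block malloc base bdev reserves a
# 2048-block header (data_offset 2048, data_size 63488), so the 3-disk
# concat volume reports blockcnt 3 * 63488 = 190464, as logged later.
num_blocks, data_offset, n = 65536, 2048, 3
assert n * (num_blocks - data_offset) == 190464
```

This mirrors why the dumps show `data_offset: 2048` / `data_size: 63488` once a base bdev is configured, and `0` while its slot is still the all-zero placeholder UUID.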
00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.556 [ 00:08:38.556 { 00:08:38.556 "name": "BaseBdev1", 00:08:38.556 "aliases": [ 00:08:38.556 "b9a8b424-cd08-41de-8345-806fbc7b1e76" 00:08:38.556 ], 00:08:38.556 "product_name": "Malloc disk", 00:08:38.556 "block_size": 512, 00:08:38.556 "num_blocks": 65536, 00:08:38.556 "uuid": "b9a8b424-cd08-41de-8345-806fbc7b1e76", 00:08:38.556 "assigned_rate_limits": { 00:08:38.556 
"rw_ios_per_sec": 0, 00:08:38.556 "rw_mbytes_per_sec": 0, 00:08:38.556 "r_mbytes_per_sec": 0, 00:08:38.556 "w_mbytes_per_sec": 0 00:08:38.556 }, 00:08:38.556 "claimed": true, 00:08:38.556 "claim_type": "exclusive_write", 00:08:38.556 "zoned": false, 00:08:38.556 "supported_io_types": { 00:08:38.556 "read": true, 00:08:38.556 "write": true, 00:08:38.556 "unmap": true, 00:08:38.556 "flush": true, 00:08:38.556 "reset": true, 00:08:38.556 "nvme_admin": false, 00:08:38.556 "nvme_io": false, 00:08:38.556 "nvme_io_md": false, 00:08:38.556 "write_zeroes": true, 00:08:38.556 "zcopy": true, 00:08:38.556 "get_zone_info": false, 00:08:38.556 "zone_management": false, 00:08:38.556 "zone_append": false, 00:08:38.556 "compare": false, 00:08:38.556 "compare_and_write": false, 00:08:38.556 "abort": true, 00:08:38.556 "seek_hole": false, 00:08:38.556 "seek_data": false, 00:08:38.556 "copy": true, 00:08:38.556 "nvme_iov_md": false 00:08:38.556 }, 00:08:38.556 "memory_domains": [ 00:08:38.556 { 00:08:38.556 "dma_device_id": "system", 00:08:38.556 "dma_device_type": 1 00:08:38.556 }, 00:08:38.556 { 00:08:38.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.556 "dma_device_type": 2 00:08:38.556 } 00:08:38.556 ], 00:08:38.556 "driver_specific": {} 00:08:38.556 } 00:08:38.556 ] 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.556 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.556 "name": "Existed_Raid", 00:08:38.556 "uuid": "89419470-aaf6-4ea2-880c-163884c6980e", 00:08:38.556 "strip_size_kb": 64, 00:08:38.556 "state": "configuring", 00:08:38.556 "raid_level": "concat", 00:08:38.556 "superblock": true, 00:08:38.556 "num_base_bdevs": 3, 00:08:38.556 "num_base_bdevs_discovered": 1, 00:08:38.556 "num_base_bdevs_operational": 3, 00:08:38.556 "base_bdevs_list": [ 00:08:38.557 { 00:08:38.557 "name": "BaseBdev1", 00:08:38.557 "uuid": "b9a8b424-cd08-41de-8345-806fbc7b1e76", 00:08:38.557 "is_configured": true, 00:08:38.557 "data_offset": 2048, 00:08:38.557 "data_size": 
63488 00:08:38.557 }, 00:08:38.557 { 00:08:38.557 "name": "BaseBdev2", 00:08:38.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.557 "is_configured": false, 00:08:38.557 "data_offset": 0, 00:08:38.557 "data_size": 0 00:08:38.557 }, 00:08:38.557 { 00:08:38.557 "name": "BaseBdev3", 00:08:38.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.557 "is_configured": false, 00:08:38.557 "data_offset": 0, 00:08:38.557 "data_size": 0 00:08:38.557 } 00:08:38.557 ] 00:08:38.557 }' 00:08:38.557 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.557 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.815 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.815 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.815 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.074 [2024-09-30 23:26:18.669939] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.074 [2024-09-30 23:26:18.669983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.074 [2024-09-30 23:26:18.681964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.074 [2024-09-30 
23:26:18.683838] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.074 [2024-09-30 23:26:18.683932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.074 [2024-09-30 23:26:18.683961] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:39.074 [2024-09-30 23:26:18.683985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.074 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.075 "name": "Existed_Raid", 00:08:39.075 "uuid": "4b221a4b-9c46-4778-aab8-9cf285ed3db2", 00:08:39.075 "strip_size_kb": 64, 00:08:39.075 "state": "configuring", 00:08:39.075 "raid_level": "concat", 00:08:39.075 "superblock": true, 00:08:39.075 "num_base_bdevs": 3, 00:08:39.075 "num_base_bdevs_discovered": 1, 00:08:39.075 "num_base_bdevs_operational": 3, 00:08:39.075 "base_bdevs_list": [ 00:08:39.075 { 00:08:39.075 "name": "BaseBdev1", 00:08:39.075 "uuid": "b9a8b424-cd08-41de-8345-806fbc7b1e76", 00:08:39.075 "is_configured": true, 00:08:39.075 "data_offset": 2048, 00:08:39.075 "data_size": 63488 00:08:39.075 }, 00:08:39.075 { 00:08:39.075 "name": "BaseBdev2", 00:08:39.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.075 "is_configured": false, 00:08:39.075 "data_offset": 0, 00:08:39.075 "data_size": 0 00:08:39.075 }, 00:08:39.075 { 00:08:39.075 "name": "BaseBdev3", 00:08:39.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.075 "is_configured": false, 00:08:39.075 "data_offset": 0, 00:08:39.075 "data_size": 0 00:08:39.075 } 00:08:39.075 ] 00:08:39.075 }' 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.075 23:26:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.334 [2024-09-30 23:26:19.129188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.334 BaseBdev2 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.334 [ 00:08:39.334 { 00:08:39.334 "name": "BaseBdev2", 00:08:39.334 "aliases": [ 00:08:39.334 "fa98360a-413a-4492-9a4a-aa02e0df0cd9" 00:08:39.334 ], 00:08:39.334 "product_name": "Malloc disk", 00:08:39.334 "block_size": 512, 00:08:39.334 "num_blocks": 65536, 00:08:39.334 "uuid": "fa98360a-413a-4492-9a4a-aa02e0df0cd9", 00:08:39.334 "assigned_rate_limits": { 00:08:39.334 "rw_ios_per_sec": 0, 00:08:39.334 "rw_mbytes_per_sec": 0, 00:08:39.334 "r_mbytes_per_sec": 0, 00:08:39.334 "w_mbytes_per_sec": 0 00:08:39.334 }, 00:08:39.334 "claimed": true, 00:08:39.334 "claim_type": "exclusive_write", 00:08:39.334 "zoned": false, 00:08:39.334 "supported_io_types": { 00:08:39.334 "read": true, 00:08:39.334 "write": true, 00:08:39.334 "unmap": true, 00:08:39.334 "flush": true, 00:08:39.334 "reset": true, 00:08:39.334 "nvme_admin": false, 00:08:39.334 "nvme_io": false, 00:08:39.334 "nvme_io_md": false, 00:08:39.334 "write_zeroes": true, 00:08:39.334 "zcopy": true, 00:08:39.334 "get_zone_info": false, 00:08:39.334 "zone_management": false, 00:08:39.334 "zone_append": false, 00:08:39.334 "compare": false, 00:08:39.334 "compare_and_write": false, 00:08:39.334 "abort": true, 00:08:39.334 "seek_hole": false, 00:08:39.334 "seek_data": false, 00:08:39.334 "copy": true, 00:08:39.334 "nvme_iov_md": false 00:08:39.334 }, 00:08:39.334 "memory_domains": [ 00:08:39.334 { 00:08:39.334 "dma_device_id": "system", 00:08:39.334 "dma_device_type": 1 00:08:39.334 }, 00:08:39.334 { 00:08:39.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.334 "dma_device_type": 2 00:08:39.334 } 00:08:39.334 ], 00:08:39.334 "driver_specific": {} 00:08:39.334 } 00:08:39.334 ] 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.334 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.335 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.594 23:26:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.594 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.594 "name": "Existed_Raid", 00:08:39.594 "uuid": "4b221a4b-9c46-4778-aab8-9cf285ed3db2", 00:08:39.594 "strip_size_kb": 64, 00:08:39.594 "state": "configuring", 00:08:39.594 "raid_level": "concat", 00:08:39.594 "superblock": true, 00:08:39.594 "num_base_bdevs": 3, 00:08:39.594 "num_base_bdevs_discovered": 2, 00:08:39.594 "num_base_bdevs_operational": 3, 00:08:39.594 "base_bdevs_list": [ 00:08:39.594 { 00:08:39.594 "name": "BaseBdev1", 00:08:39.594 "uuid": "b9a8b424-cd08-41de-8345-806fbc7b1e76", 00:08:39.594 "is_configured": true, 00:08:39.594 "data_offset": 2048, 00:08:39.594 "data_size": 63488 00:08:39.594 }, 00:08:39.594 { 00:08:39.594 "name": "BaseBdev2", 00:08:39.594 "uuid": "fa98360a-413a-4492-9a4a-aa02e0df0cd9", 00:08:39.594 "is_configured": true, 00:08:39.594 "data_offset": 2048, 00:08:39.594 "data_size": 63488 00:08:39.594 }, 00:08:39.594 { 00:08:39.594 "name": "BaseBdev3", 00:08:39.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.594 "is_configured": false, 00:08:39.594 "data_offset": 0, 00:08:39.594 "data_size": 0 00:08:39.594 } 00:08:39.594 ] 00:08:39.594 }' 00:08:39.594 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.594 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.853 [2024-09-30 23:26:19.623321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.853 [2024-09-30 23:26:19.623526] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:39.853 [2024-09-30 23:26:19.623544] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.853 BaseBdev3 00:08:39.853 [2024-09-30 23:26:19.623838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:39.853 [2024-09-30 23:26:19.624001] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:39.853 [2024-09-30 23:26:19.624080] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:39.853 [2024-09-30 23:26:19.624208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.853 [ 00:08:39.853 { 00:08:39.853 "name": "BaseBdev3", 00:08:39.853 "aliases": [ 00:08:39.853 "3085fc4c-2ef8-4d6e-91e5-82b5e6d77874" 00:08:39.853 ], 00:08:39.853 "product_name": "Malloc disk", 00:08:39.853 "block_size": 512, 00:08:39.853 "num_blocks": 65536, 00:08:39.853 "uuid": "3085fc4c-2ef8-4d6e-91e5-82b5e6d77874", 00:08:39.853 "assigned_rate_limits": { 00:08:39.853 "rw_ios_per_sec": 0, 00:08:39.853 "rw_mbytes_per_sec": 0, 00:08:39.853 "r_mbytes_per_sec": 0, 00:08:39.853 "w_mbytes_per_sec": 0 00:08:39.853 }, 00:08:39.853 "claimed": true, 00:08:39.853 "claim_type": "exclusive_write", 00:08:39.853 "zoned": false, 00:08:39.853 "supported_io_types": { 00:08:39.853 "read": true, 00:08:39.853 "write": true, 00:08:39.853 "unmap": true, 00:08:39.853 "flush": true, 00:08:39.853 "reset": true, 00:08:39.853 "nvme_admin": false, 00:08:39.853 "nvme_io": false, 00:08:39.853 "nvme_io_md": false, 00:08:39.853 "write_zeroes": true, 00:08:39.853 "zcopy": true, 00:08:39.853 "get_zone_info": false, 00:08:39.853 "zone_management": false, 00:08:39.853 "zone_append": false, 00:08:39.853 "compare": false, 00:08:39.853 "compare_and_write": false, 00:08:39.853 "abort": true, 00:08:39.853 "seek_hole": false, 00:08:39.853 "seek_data": false, 00:08:39.853 "copy": true, 00:08:39.853 "nvme_iov_md": false 00:08:39.853 }, 00:08:39.853 "memory_domains": [ 00:08:39.853 { 00:08:39.853 "dma_device_id": "system", 00:08:39.853 "dma_device_type": 1 00:08:39.853 }, 00:08:39.853 { 00:08:39.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.853 "dma_device_type": 2 00:08:39.853 } 00:08:39.853 ], 00:08:39.853 "driver_specific": 
{} 00:08:39.853 } 00:08:39.853 ] 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.853 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.854 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.112 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.112 "name": "Existed_Raid", 00:08:40.112 "uuid": "4b221a4b-9c46-4778-aab8-9cf285ed3db2", 00:08:40.112 "strip_size_kb": 64, 00:08:40.112 "state": "online", 00:08:40.112 "raid_level": "concat", 00:08:40.112 "superblock": true, 00:08:40.112 "num_base_bdevs": 3, 00:08:40.112 "num_base_bdevs_discovered": 3, 00:08:40.112 "num_base_bdevs_operational": 3, 00:08:40.112 "base_bdevs_list": [ 00:08:40.112 { 00:08:40.112 "name": "BaseBdev1", 00:08:40.112 "uuid": "b9a8b424-cd08-41de-8345-806fbc7b1e76", 00:08:40.112 "is_configured": true, 00:08:40.112 "data_offset": 2048, 00:08:40.112 "data_size": 63488 00:08:40.112 }, 00:08:40.112 { 00:08:40.112 "name": "BaseBdev2", 00:08:40.112 "uuid": "fa98360a-413a-4492-9a4a-aa02e0df0cd9", 00:08:40.112 "is_configured": true, 00:08:40.112 "data_offset": 2048, 00:08:40.112 "data_size": 63488 00:08:40.112 }, 00:08:40.112 { 00:08:40.112 "name": "BaseBdev3", 00:08:40.112 "uuid": "3085fc4c-2ef8-4d6e-91e5-82b5e6d77874", 00:08:40.112 "is_configured": true, 00:08:40.112 "data_offset": 2048, 00:08:40.112 "data_size": 63488 00:08:40.112 } 00:08:40.112 ] 00:08:40.112 }' 00:08:40.112 23:26:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.112 23:26:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.371 [2024-09-30 23:26:20.066916] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.371 "name": "Existed_Raid", 00:08:40.371 "aliases": [ 00:08:40.371 "4b221a4b-9c46-4778-aab8-9cf285ed3db2" 00:08:40.371 ], 00:08:40.371 "product_name": "Raid Volume", 00:08:40.371 "block_size": 512, 00:08:40.371 "num_blocks": 190464, 00:08:40.371 "uuid": "4b221a4b-9c46-4778-aab8-9cf285ed3db2", 00:08:40.371 "assigned_rate_limits": { 00:08:40.371 "rw_ios_per_sec": 0, 00:08:40.371 "rw_mbytes_per_sec": 0, 00:08:40.371 "r_mbytes_per_sec": 0, 00:08:40.371 "w_mbytes_per_sec": 0 00:08:40.371 }, 00:08:40.371 "claimed": false, 00:08:40.371 "zoned": false, 00:08:40.371 "supported_io_types": { 00:08:40.371 "read": true, 00:08:40.371 "write": true, 00:08:40.371 "unmap": true, 00:08:40.371 "flush": true, 00:08:40.371 "reset": true, 00:08:40.371 "nvme_admin": false, 00:08:40.371 "nvme_io": false, 00:08:40.371 "nvme_io_md": false, 00:08:40.371 
"write_zeroes": true, 00:08:40.371 "zcopy": false, 00:08:40.371 "get_zone_info": false, 00:08:40.371 "zone_management": false, 00:08:40.371 "zone_append": false, 00:08:40.371 "compare": false, 00:08:40.371 "compare_and_write": false, 00:08:40.371 "abort": false, 00:08:40.371 "seek_hole": false, 00:08:40.371 "seek_data": false, 00:08:40.371 "copy": false, 00:08:40.371 "nvme_iov_md": false 00:08:40.371 }, 00:08:40.371 "memory_domains": [ 00:08:40.371 { 00:08:40.371 "dma_device_id": "system", 00:08:40.371 "dma_device_type": 1 00:08:40.371 }, 00:08:40.371 { 00:08:40.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.371 "dma_device_type": 2 00:08:40.371 }, 00:08:40.371 { 00:08:40.371 "dma_device_id": "system", 00:08:40.371 "dma_device_type": 1 00:08:40.371 }, 00:08:40.371 { 00:08:40.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.371 "dma_device_type": 2 00:08:40.371 }, 00:08:40.371 { 00:08:40.371 "dma_device_id": "system", 00:08:40.371 "dma_device_type": 1 00:08:40.371 }, 00:08:40.371 { 00:08:40.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.371 "dma_device_type": 2 00:08:40.371 } 00:08:40.371 ], 00:08:40.371 "driver_specific": { 00:08:40.371 "raid": { 00:08:40.371 "uuid": "4b221a4b-9c46-4778-aab8-9cf285ed3db2", 00:08:40.371 "strip_size_kb": 64, 00:08:40.371 "state": "online", 00:08:40.371 "raid_level": "concat", 00:08:40.371 "superblock": true, 00:08:40.371 "num_base_bdevs": 3, 00:08:40.371 "num_base_bdevs_discovered": 3, 00:08:40.371 "num_base_bdevs_operational": 3, 00:08:40.371 "base_bdevs_list": [ 00:08:40.371 { 00:08:40.371 "name": "BaseBdev1", 00:08:40.371 "uuid": "b9a8b424-cd08-41de-8345-806fbc7b1e76", 00:08:40.371 "is_configured": true, 00:08:40.371 "data_offset": 2048, 00:08:40.371 "data_size": 63488 00:08:40.371 }, 00:08:40.371 { 00:08:40.371 "name": "BaseBdev2", 00:08:40.371 "uuid": "fa98360a-413a-4492-9a4a-aa02e0df0cd9", 00:08:40.371 "is_configured": true, 00:08:40.371 "data_offset": 2048, 00:08:40.371 "data_size": 63488 00:08:40.371 }, 
00:08:40.371 { 00:08:40.371 "name": "BaseBdev3", 00:08:40.371 "uuid": "3085fc4c-2ef8-4d6e-91e5-82b5e6d77874", 00:08:40.371 "is_configured": true, 00:08:40.371 "data_offset": 2048, 00:08:40.371 "data_size": 63488 00:08:40.371 } 00:08:40.371 ] 00:08:40.371 } 00:08:40.371 } 00:08:40.371 }' 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:40.371 BaseBdev2 00:08:40.371 BaseBdev3' 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.371 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.630 
23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.630 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.630 [2024-09-30 23:26:20.326274] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.630 [2024-09-30 23:26:20.326308] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.631 [2024-09-30 23:26:20.326358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.631 "name": "Existed_Raid", 00:08:40.631 "uuid": "4b221a4b-9c46-4778-aab8-9cf285ed3db2", 00:08:40.631 "strip_size_kb": 64, 00:08:40.631 "state": "offline", 00:08:40.631 "raid_level": "concat", 00:08:40.631 "superblock": true, 00:08:40.631 "num_base_bdevs": 3, 00:08:40.631 "num_base_bdevs_discovered": 2, 00:08:40.631 "num_base_bdevs_operational": 2, 00:08:40.631 "base_bdevs_list": [ 00:08:40.631 { 00:08:40.631 "name": null, 00:08:40.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.631 "is_configured": false, 00:08:40.631 "data_offset": 0, 00:08:40.631 "data_size": 63488 00:08:40.631 }, 00:08:40.631 { 00:08:40.631 "name": "BaseBdev2", 00:08:40.631 "uuid": "fa98360a-413a-4492-9a4a-aa02e0df0cd9", 00:08:40.631 "is_configured": true, 00:08:40.631 "data_offset": 2048, 00:08:40.631 "data_size": 63488 00:08:40.631 }, 00:08:40.631 { 00:08:40.631 "name": "BaseBdev3", 00:08:40.631 "uuid": "3085fc4c-2ef8-4d6e-91e5-82b5e6d77874", 
00:08:40.631 "is_configured": true, 00:08:40.631 "data_offset": 2048, 00:08:40.631 "data_size": 63488 00:08:40.631 } 00:08:40.631 ] 00:08:40.631 }' 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.631 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.199 [2024-09-30 23:26:20.821012] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.199 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.199 [2024-09-30 23:26:20.891906] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.200 [2024-09-30 23:26:20.892035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.200 BaseBdev2 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:41.200 23:26:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.200 23:26:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.200 [ 00:08:41.200 { 00:08:41.200 "name": "BaseBdev2", 00:08:41.200 "aliases": [ 00:08:41.200 "d62f9273-db5d-411f-9d55-143634d24d39" 00:08:41.200 ], 00:08:41.200 "product_name": "Malloc disk", 00:08:41.200 "block_size": 512, 00:08:41.200 "num_blocks": 65536, 00:08:41.200 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:41.200 "assigned_rate_limits": { 00:08:41.200 "rw_ios_per_sec": 0, 00:08:41.200 "rw_mbytes_per_sec": 0, 00:08:41.200 "r_mbytes_per_sec": 0, 00:08:41.200 "w_mbytes_per_sec": 0 00:08:41.200 }, 00:08:41.200 "claimed": false, 00:08:41.200 "zoned": false, 00:08:41.200 "supported_io_types": { 00:08:41.200 "read": true, 00:08:41.200 "write": true, 00:08:41.200 "unmap": true, 00:08:41.200 "flush": true, 00:08:41.200 "reset": true, 00:08:41.200 "nvme_admin": false, 00:08:41.200 "nvme_io": false, 00:08:41.200 "nvme_io_md": false, 00:08:41.200 "write_zeroes": true, 00:08:41.200 "zcopy": true, 00:08:41.200 "get_zone_info": false, 00:08:41.200 
"zone_management": false, 00:08:41.200 "zone_append": false, 00:08:41.200 "compare": false, 00:08:41.200 "compare_and_write": false, 00:08:41.200 "abort": true, 00:08:41.200 "seek_hole": false, 00:08:41.200 "seek_data": false, 00:08:41.200 "copy": true, 00:08:41.200 "nvme_iov_md": false 00:08:41.200 }, 00:08:41.200 "memory_domains": [ 00:08:41.200 { 00:08:41.200 "dma_device_id": "system", 00:08:41.200 "dma_device_type": 1 00:08:41.200 }, 00:08:41.200 { 00:08:41.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.200 "dma_device_type": 2 00:08:41.200 } 00:08:41.200 ], 00:08:41.200 "driver_specific": {} 00:08:41.200 } 00:08:41.200 ] 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.200 BaseBdev3 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.200 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.200 [ 00:08:41.200 { 00:08:41.200 "name": "BaseBdev3", 00:08:41.200 "aliases": [ 00:08:41.200 "c3be721b-b1d9-45f2-89e8-6cdcbb345827" 00:08:41.200 ], 00:08:41.200 "product_name": "Malloc disk", 00:08:41.200 "block_size": 512, 00:08:41.200 "num_blocks": 65536, 00:08:41.200 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:41.200 "assigned_rate_limits": { 00:08:41.200 "rw_ios_per_sec": 0, 00:08:41.200 "rw_mbytes_per_sec": 0, 00:08:41.200 "r_mbytes_per_sec": 0, 00:08:41.200 "w_mbytes_per_sec": 0 00:08:41.200 }, 00:08:41.200 "claimed": false, 00:08:41.200 "zoned": false, 00:08:41.200 "supported_io_types": { 00:08:41.200 "read": true, 00:08:41.200 "write": true, 00:08:41.200 "unmap": true, 00:08:41.460 "flush": true, 00:08:41.460 "reset": true, 00:08:41.460 "nvme_admin": false, 00:08:41.460 "nvme_io": false, 00:08:41.460 "nvme_io_md": false, 00:08:41.460 "write_zeroes": true, 00:08:41.460 
"zcopy": true, 00:08:41.460 "get_zone_info": false, 00:08:41.460 "zone_management": false, 00:08:41.460 "zone_append": false, 00:08:41.460 "compare": false, 00:08:41.460 "compare_and_write": false, 00:08:41.460 "abort": true, 00:08:41.460 "seek_hole": false, 00:08:41.460 "seek_data": false, 00:08:41.460 "copy": true, 00:08:41.460 "nvme_iov_md": false 00:08:41.460 }, 00:08:41.460 "memory_domains": [ 00:08:41.460 { 00:08:41.460 "dma_device_id": "system", 00:08:41.460 "dma_device_type": 1 00:08:41.460 }, 00:08:41.460 { 00:08:41.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.460 "dma_device_type": 2 00:08:41.460 } 00:08:41.460 ], 00:08:41.460 "driver_specific": {} 00:08:41.460 } 00:08:41.460 ] 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.460 [2024-09-30 23:26:21.066341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.460 [2024-09-30 23:26:21.066465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.460 [2024-09-30 23:26:21.066507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.460 [2024-09-30 23:26:21.068385] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.460 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.460 23:26:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.460 "name": "Existed_Raid", 00:08:41.460 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:41.460 "strip_size_kb": 64, 00:08:41.460 "state": "configuring", 00:08:41.460 "raid_level": "concat", 00:08:41.460 "superblock": true, 00:08:41.460 "num_base_bdevs": 3, 00:08:41.460 "num_base_bdevs_discovered": 2, 00:08:41.460 "num_base_bdevs_operational": 3, 00:08:41.460 "base_bdevs_list": [ 00:08:41.460 { 00:08:41.460 "name": "BaseBdev1", 00:08:41.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.460 "is_configured": false, 00:08:41.460 "data_offset": 0, 00:08:41.460 "data_size": 0 00:08:41.460 }, 00:08:41.460 { 00:08:41.460 "name": "BaseBdev2", 00:08:41.460 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:41.460 "is_configured": true, 00:08:41.460 "data_offset": 2048, 00:08:41.460 "data_size": 63488 00:08:41.461 }, 00:08:41.461 { 00:08:41.461 "name": "BaseBdev3", 00:08:41.461 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:41.461 "is_configured": true, 00:08:41.461 "data_offset": 2048, 00:08:41.461 "data_size": 63488 00:08:41.461 } 00:08:41.461 ] 00:08:41.461 }' 00:08:41.461 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.461 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.720 [2024-09-30 23:26:21.513547] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.720 23:26:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.720 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.721 "name": "Existed_Raid", 00:08:41.721 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:41.721 "strip_size_kb": 64, 
00:08:41.721 "state": "configuring", 00:08:41.721 "raid_level": "concat", 00:08:41.721 "superblock": true, 00:08:41.721 "num_base_bdevs": 3, 00:08:41.721 "num_base_bdevs_discovered": 1, 00:08:41.721 "num_base_bdevs_operational": 3, 00:08:41.721 "base_bdevs_list": [ 00:08:41.721 { 00:08:41.721 "name": "BaseBdev1", 00:08:41.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.721 "is_configured": false, 00:08:41.721 "data_offset": 0, 00:08:41.721 "data_size": 0 00:08:41.721 }, 00:08:41.721 { 00:08:41.721 "name": null, 00:08:41.721 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:41.721 "is_configured": false, 00:08:41.721 "data_offset": 0, 00:08:41.721 "data_size": 63488 00:08:41.721 }, 00:08:41.721 { 00:08:41.721 "name": "BaseBdev3", 00:08:41.721 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:41.721 "is_configured": true, 00:08:41.721 "data_offset": 2048, 00:08:41.721 "data_size": 63488 00:08:41.721 } 00:08:41.721 ] 00:08:41.721 }' 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.721 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.289 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.289 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.290 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.290 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:42.290 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.290 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:42.290 23:26:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:42.290 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.290 23:26:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.290 [2024-09-30 23:26:22.011679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.290 BaseBdev1 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.290 
[ 00:08:42.290 { 00:08:42.290 "name": "BaseBdev1", 00:08:42.290 "aliases": [ 00:08:42.290 "29333003-995e-4b2d-b22c-1df97e80cc0b" 00:08:42.290 ], 00:08:42.290 "product_name": "Malloc disk", 00:08:42.290 "block_size": 512, 00:08:42.290 "num_blocks": 65536, 00:08:42.290 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:42.290 "assigned_rate_limits": { 00:08:42.290 "rw_ios_per_sec": 0, 00:08:42.290 "rw_mbytes_per_sec": 0, 00:08:42.290 "r_mbytes_per_sec": 0, 00:08:42.290 "w_mbytes_per_sec": 0 00:08:42.290 }, 00:08:42.290 "claimed": true, 00:08:42.290 "claim_type": "exclusive_write", 00:08:42.290 "zoned": false, 00:08:42.290 "supported_io_types": { 00:08:42.290 "read": true, 00:08:42.290 "write": true, 00:08:42.290 "unmap": true, 00:08:42.290 "flush": true, 00:08:42.290 "reset": true, 00:08:42.290 "nvme_admin": false, 00:08:42.290 "nvme_io": false, 00:08:42.290 "nvme_io_md": false, 00:08:42.290 "write_zeroes": true, 00:08:42.290 "zcopy": true, 00:08:42.290 "get_zone_info": false, 00:08:42.290 "zone_management": false, 00:08:42.290 "zone_append": false, 00:08:42.290 "compare": false, 00:08:42.290 "compare_and_write": false, 00:08:42.290 "abort": true, 00:08:42.290 "seek_hole": false, 00:08:42.290 "seek_data": false, 00:08:42.290 "copy": true, 00:08:42.290 "nvme_iov_md": false 00:08:42.290 }, 00:08:42.290 "memory_domains": [ 00:08:42.290 { 00:08:42.290 "dma_device_id": "system", 00:08:42.290 "dma_device_type": 1 00:08:42.290 }, 00:08:42.290 { 00:08:42.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.290 "dma_device_type": 2 00:08:42.290 } 00:08:42.290 ], 00:08:42.290 "driver_specific": {} 00:08:42.290 } 00:08:42.290 ] 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.290 "name": "Existed_Raid", 00:08:42.290 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:42.290 "strip_size_kb": 64, 00:08:42.290 "state": "configuring", 00:08:42.290 "raid_level": "concat", 00:08:42.290 "superblock": true, 
00:08:42.290 "num_base_bdevs": 3, 00:08:42.290 "num_base_bdevs_discovered": 2, 00:08:42.290 "num_base_bdevs_operational": 3, 00:08:42.290 "base_bdevs_list": [ 00:08:42.290 { 00:08:42.290 "name": "BaseBdev1", 00:08:42.290 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:42.290 "is_configured": true, 00:08:42.290 "data_offset": 2048, 00:08:42.290 "data_size": 63488 00:08:42.290 }, 00:08:42.290 { 00:08:42.290 "name": null, 00:08:42.290 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:42.290 "is_configured": false, 00:08:42.290 "data_offset": 0, 00:08:42.290 "data_size": 63488 00:08:42.290 }, 00:08:42.290 { 00:08:42.290 "name": "BaseBdev3", 00:08:42.290 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:42.290 "is_configured": true, 00:08:42.290 "data_offset": 2048, 00:08:42.290 "data_size": 63488 00:08:42.290 } 00:08:42.290 ] 00:08:42.290 }' 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.290 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.858 [2024-09-30 23:26:22.470950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:42.858 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.859 "name": "Existed_Raid", 00:08:42.859 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:42.859 "strip_size_kb": 64, 00:08:42.859 "state": "configuring", 00:08:42.859 "raid_level": "concat", 00:08:42.859 "superblock": true, 00:08:42.859 "num_base_bdevs": 3, 00:08:42.859 "num_base_bdevs_discovered": 1, 00:08:42.859 "num_base_bdevs_operational": 3, 00:08:42.859 "base_bdevs_list": [ 00:08:42.859 { 00:08:42.859 "name": "BaseBdev1", 00:08:42.859 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:42.859 "is_configured": true, 00:08:42.859 "data_offset": 2048, 00:08:42.859 "data_size": 63488 00:08:42.859 }, 00:08:42.859 { 00:08:42.859 "name": null, 00:08:42.859 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:42.859 "is_configured": false, 00:08:42.859 "data_offset": 0, 00:08:42.859 "data_size": 63488 00:08:42.859 }, 00:08:42.859 { 00:08:42.859 "name": null, 00:08:42.859 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:42.859 "is_configured": false, 00:08:42.859 "data_offset": 0, 00:08:42.859 "data_size": 63488 00:08:42.859 } 00:08:42.859 ] 00:08:42.859 }' 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.859 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.118 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.118 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.118 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.118 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:08:43.118 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.378 [2024-09-30 23:26:22.982115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.378 23:26:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.378 23:26:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.378 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.378 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.378 "name": "Existed_Raid", 00:08:43.378 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:43.378 "strip_size_kb": 64, 00:08:43.378 "state": "configuring", 00:08:43.378 "raid_level": "concat", 00:08:43.378 "superblock": true, 00:08:43.378 "num_base_bdevs": 3, 00:08:43.378 "num_base_bdevs_discovered": 2, 00:08:43.378 "num_base_bdevs_operational": 3, 00:08:43.378 "base_bdevs_list": [ 00:08:43.378 { 00:08:43.378 "name": "BaseBdev1", 00:08:43.378 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:43.378 "is_configured": true, 00:08:43.378 "data_offset": 2048, 00:08:43.378 "data_size": 63488 00:08:43.378 }, 00:08:43.378 { 00:08:43.378 "name": null, 00:08:43.378 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:43.378 "is_configured": false, 00:08:43.378 "data_offset": 0, 00:08:43.378 "data_size": 63488 00:08:43.378 }, 00:08:43.378 { 00:08:43.378 "name": "BaseBdev3", 00:08:43.378 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:43.378 "is_configured": true, 00:08:43.378 "data_offset": 2048, 00:08:43.378 "data_size": 63488 00:08:43.378 } 00:08:43.378 ] 00:08:43.378 }' 00:08:43.378 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.378 
23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.637 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.637 [2024-09-30 23:26:23.481255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.896 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.896 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.896 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.896 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.896 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.896 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.897 23:26:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.897 "name": "Existed_Raid", 00:08:43.897 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:43.897 "strip_size_kb": 64, 00:08:43.897 "state": "configuring", 00:08:43.897 "raid_level": "concat", 00:08:43.897 "superblock": true, 00:08:43.897 "num_base_bdevs": 3, 00:08:43.897 "num_base_bdevs_discovered": 1, 00:08:43.897 "num_base_bdevs_operational": 3, 00:08:43.897 "base_bdevs_list": [ 00:08:43.897 { 00:08:43.897 "name": null, 00:08:43.897 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:43.897 "is_configured": false, 00:08:43.897 "data_offset": 0, 00:08:43.897 "data_size": 63488 00:08:43.897 }, 00:08:43.897 { 00:08:43.897 "name": null, 00:08:43.897 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:43.897 "is_configured": false, 
00:08:43.897 "data_offset": 0, 00:08:43.897 "data_size": 63488 00:08:43.897 }, 00:08:43.897 { 00:08:43.897 "name": "BaseBdev3", 00:08:43.897 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:43.897 "is_configured": true, 00:08:43.897 "data_offset": 2048, 00:08:43.897 "data_size": 63488 00:08:43.897 } 00:08:43.897 ] 00:08:43.897 }' 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.897 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.155 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.155 23:26:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.155 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.155 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.155 23:26:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.414 [2024-09-30 23:26:24.014971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.414 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.415 "name": "Existed_Raid", 00:08:44.415 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:44.415 "strip_size_kb": 64, 00:08:44.415 "state": "configuring", 00:08:44.415 "raid_level": "concat", 00:08:44.415 "superblock": true, 00:08:44.415 
"num_base_bdevs": 3, 00:08:44.415 "num_base_bdevs_discovered": 2, 00:08:44.415 "num_base_bdevs_operational": 3, 00:08:44.415 "base_bdevs_list": [ 00:08:44.415 { 00:08:44.415 "name": null, 00:08:44.415 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:44.415 "is_configured": false, 00:08:44.415 "data_offset": 0, 00:08:44.415 "data_size": 63488 00:08:44.415 }, 00:08:44.415 { 00:08:44.415 "name": "BaseBdev2", 00:08:44.415 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:44.415 "is_configured": true, 00:08:44.415 "data_offset": 2048, 00:08:44.415 "data_size": 63488 00:08:44.415 }, 00:08:44.415 { 00:08:44.415 "name": "BaseBdev3", 00:08:44.415 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:44.415 "is_configured": true, 00:08:44.415 "data_offset": 2048, 00:08:44.415 "data_size": 63488 00:08:44.415 } 00:08:44.415 ] 00:08:44.415 }' 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.415 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:44.675 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 29333003-995e-4b2d-b22c-1df97e80cc0b 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.934 [2024-09-30 23:26:24.561201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:44.934 [2024-09-30 23:26:24.561478] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:44.934 [2024-09-30 23:26:24.561530] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.934 NewBaseBdev 00:08:44.934 [2024-09-30 23:26:24.561823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:44.934 [2024-09-30 23:26:24.561956] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:44.934 [2024-09-30 23:26:24.561974] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:44.934 [2024-09-30 23:26:24.562081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=NewBaseBdev 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.934 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.934 [ 00:08:44.934 { 00:08:44.934 "name": "NewBaseBdev", 00:08:44.934 "aliases": [ 00:08:44.934 "29333003-995e-4b2d-b22c-1df97e80cc0b" 00:08:44.934 ], 00:08:44.934 "product_name": "Malloc disk", 00:08:44.934 "block_size": 512, 00:08:44.934 "num_blocks": 65536, 00:08:44.934 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:44.934 "assigned_rate_limits": { 00:08:44.934 "rw_ios_per_sec": 0, 00:08:44.934 "rw_mbytes_per_sec": 0, 00:08:44.934 "r_mbytes_per_sec": 0, 00:08:44.934 "w_mbytes_per_sec": 0 00:08:44.934 }, 00:08:44.934 "claimed": true, 00:08:44.934 "claim_type": "exclusive_write", 00:08:44.934 "zoned": false, 00:08:44.934 "supported_io_types": { 00:08:44.934 "read": true, 00:08:44.934 
"write": true, 00:08:44.934 "unmap": true, 00:08:44.934 "flush": true, 00:08:44.934 "reset": true, 00:08:44.934 "nvme_admin": false, 00:08:44.934 "nvme_io": false, 00:08:44.935 "nvme_io_md": false, 00:08:44.935 "write_zeroes": true, 00:08:44.935 "zcopy": true, 00:08:44.935 "get_zone_info": false, 00:08:44.935 "zone_management": false, 00:08:44.935 "zone_append": false, 00:08:44.935 "compare": false, 00:08:44.935 "compare_and_write": false, 00:08:44.935 "abort": true, 00:08:44.935 "seek_hole": false, 00:08:44.935 "seek_data": false, 00:08:44.935 "copy": true, 00:08:44.935 "nvme_iov_md": false 00:08:44.935 }, 00:08:44.935 "memory_domains": [ 00:08:44.935 { 00:08:44.935 "dma_device_id": "system", 00:08:44.935 "dma_device_type": 1 00:08:44.935 }, 00:08:44.935 { 00:08:44.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.935 "dma_device_type": 2 00:08:44.935 } 00:08:44.935 ], 00:08:44.935 "driver_specific": {} 00:08:44.935 } 00:08:44.935 ] 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.935 "name": "Existed_Raid", 00:08:44.935 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:44.935 "strip_size_kb": 64, 00:08:44.935 "state": "online", 00:08:44.935 "raid_level": "concat", 00:08:44.935 "superblock": true, 00:08:44.935 "num_base_bdevs": 3, 00:08:44.935 "num_base_bdevs_discovered": 3, 00:08:44.935 "num_base_bdevs_operational": 3, 00:08:44.935 "base_bdevs_list": [ 00:08:44.935 { 00:08:44.935 "name": "NewBaseBdev", 00:08:44.935 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:44.935 "is_configured": true, 00:08:44.935 "data_offset": 2048, 00:08:44.935 "data_size": 63488 00:08:44.935 }, 00:08:44.935 { 00:08:44.935 "name": "BaseBdev2", 00:08:44.935 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:44.935 "is_configured": true, 00:08:44.935 "data_offset": 2048, 00:08:44.935 "data_size": 63488 00:08:44.935 }, 00:08:44.935 { 00:08:44.935 "name": "BaseBdev3", 00:08:44.935 "uuid": 
"c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:44.935 "is_configured": true, 00:08:44.935 "data_offset": 2048, 00:08:44.935 "data_size": 63488 00:08:44.935 } 00:08:44.935 ] 00:08:44.935 }' 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.935 23:26:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.504 [2024-09-30 23:26:25.076647] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.504 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.504 "name": "Existed_Raid", 00:08:45.504 "aliases": [ 00:08:45.504 "ed0e7740-9466-43cf-b3f8-93de318c8d9c" 
00:08:45.504 ], 00:08:45.504 "product_name": "Raid Volume", 00:08:45.504 "block_size": 512, 00:08:45.504 "num_blocks": 190464, 00:08:45.504 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:45.504 "assigned_rate_limits": { 00:08:45.504 "rw_ios_per_sec": 0, 00:08:45.504 "rw_mbytes_per_sec": 0, 00:08:45.504 "r_mbytes_per_sec": 0, 00:08:45.504 "w_mbytes_per_sec": 0 00:08:45.504 }, 00:08:45.504 "claimed": false, 00:08:45.504 "zoned": false, 00:08:45.504 "supported_io_types": { 00:08:45.504 "read": true, 00:08:45.504 "write": true, 00:08:45.504 "unmap": true, 00:08:45.504 "flush": true, 00:08:45.504 "reset": true, 00:08:45.504 "nvme_admin": false, 00:08:45.504 "nvme_io": false, 00:08:45.504 "nvme_io_md": false, 00:08:45.504 "write_zeroes": true, 00:08:45.504 "zcopy": false, 00:08:45.504 "get_zone_info": false, 00:08:45.504 "zone_management": false, 00:08:45.504 "zone_append": false, 00:08:45.504 "compare": false, 00:08:45.504 "compare_and_write": false, 00:08:45.504 "abort": false, 00:08:45.504 "seek_hole": false, 00:08:45.504 "seek_data": false, 00:08:45.504 "copy": false, 00:08:45.504 "nvme_iov_md": false 00:08:45.504 }, 00:08:45.504 "memory_domains": [ 00:08:45.504 { 00:08:45.504 "dma_device_id": "system", 00:08:45.504 "dma_device_type": 1 00:08:45.504 }, 00:08:45.505 { 00:08:45.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.505 "dma_device_type": 2 00:08:45.505 }, 00:08:45.505 { 00:08:45.505 "dma_device_id": "system", 00:08:45.505 "dma_device_type": 1 00:08:45.505 }, 00:08:45.505 { 00:08:45.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.505 "dma_device_type": 2 00:08:45.505 }, 00:08:45.505 { 00:08:45.505 "dma_device_id": "system", 00:08:45.505 "dma_device_type": 1 00:08:45.505 }, 00:08:45.505 { 00:08:45.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.505 "dma_device_type": 2 00:08:45.505 } 00:08:45.505 ], 00:08:45.505 "driver_specific": { 00:08:45.505 "raid": { 00:08:45.505 "uuid": "ed0e7740-9466-43cf-b3f8-93de318c8d9c", 00:08:45.505 
"strip_size_kb": 64, 00:08:45.505 "state": "online", 00:08:45.505 "raid_level": "concat", 00:08:45.505 "superblock": true, 00:08:45.505 "num_base_bdevs": 3, 00:08:45.505 "num_base_bdevs_discovered": 3, 00:08:45.505 "num_base_bdevs_operational": 3, 00:08:45.505 "base_bdevs_list": [ 00:08:45.505 { 00:08:45.505 "name": "NewBaseBdev", 00:08:45.505 "uuid": "29333003-995e-4b2d-b22c-1df97e80cc0b", 00:08:45.505 "is_configured": true, 00:08:45.505 "data_offset": 2048, 00:08:45.505 "data_size": 63488 00:08:45.505 }, 00:08:45.505 { 00:08:45.505 "name": "BaseBdev2", 00:08:45.505 "uuid": "d62f9273-db5d-411f-9d55-143634d24d39", 00:08:45.505 "is_configured": true, 00:08:45.505 "data_offset": 2048, 00:08:45.505 "data_size": 63488 00:08:45.505 }, 00:08:45.505 { 00:08:45.505 "name": "BaseBdev3", 00:08:45.505 "uuid": "c3be721b-b1d9-45f2-89e8-6cdcbb345827", 00:08:45.505 "is_configured": true, 00:08:45.505 "data_offset": 2048, 00:08:45.505 "data_size": 63488 00:08:45.505 } 00:08:45.505 ] 00:08:45.505 } 00:08:45.505 } 00:08:45.505 }' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:45.505 BaseBdev2 00:08:45.505 BaseBdev3' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
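The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above are bash xtrace's escaped rendering of an ordinary string comparison: jq's `join(" ")` over `[.block_size, .md_size, .md_interleave, .dif_type]` yields `512` followed by three spaces when the metadata fields are null, so both `cmp_raid_bdev` and `cmp_base_bdev` carry trailing whitespace. A minimal sketch of that comparison pattern (standalone reconstruction, not the bdev_raid.sh source):

```shell
# Reproduce the comparison seen in the xtrace output: jq's join(" ") over
# [block_size, md_size, md_interleave, dif_type] leaves trailing spaces
# when the metadata fields are empty/null.
block_size=512
md_size="" md_interleave="" dif_type=""

# join(" ") equivalent: concatenate the four fields with single spaces.
cmp_raid_bdev="$block_size $md_size $md_interleave $dif_type"
cmp_base_bdev="$block_size $md_size $md_interleave $dif_type"

# Under `set -x` the right-hand side prints escaped, as \5\1\2\ \ \ .
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
  echo "base bdev geometry matches raid bdev: '$cmp_base_bdev'"
fi
```

The trailing spaces are why the xtrace line ends in three escaped blanks: a base bdev with metadata (non-empty `md_size` or `dif_type`) would produce a different joined string and fail the `[[ ... ]]` test.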
00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.505 [2024-09-30 23:26:25.339915] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.505 [2024-09-30 23:26:25.339992] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.505 [2024-09-30 23:26:25.340072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.505 [2024-09-30 23:26:25.340130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.505 [2024-09-30 23:26:25.340149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77395 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77395 ']' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 77395 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.505 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77395 00:08:45.764 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.764 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.764 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77395' 00:08:45.764 killing process with pid 77395 00:08:45.764 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77395 00:08:45.764 [2024-09-30 23:26:25.390839] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.764 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77395 00:08:45.764 [2024-09-30 23:26:25.421431] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.031 23:26:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:46.031 00:08:46.031 real 0m8.873s 00:08:46.031 user 0m15.134s 00:08:46.031 sys 0m1.807s 00:08:46.031 ************************************ 00:08:46.031 END TEST raid_state_function_test_sb 00:08:46.031 ************************************ 00:08:46.031 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.031 23:26:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.031 23:26:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:46.031 23:26:25 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:46.031 23:26:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.031 23:26:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.031 ************************************ 00:08:46.031 START TEST raid_superblock_test 00:08:46.031 ************************************ 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:46.031 23:26:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78003 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78003 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78003 ']' 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.031 23:26:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.031 [2024-09-30 23:26:25.829906] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
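`waitforlisten 78003` above blocks until the freshly launched bdev_svc app accepts RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A simplified stand-in for that helper (an illustrative sketch, not the actual autotest_common.sh implementation) that polls for the UNIX-domain socket:

```shell
# Simplified stand-in for autotest_common.sh's waitforlisten: poll until a
# UNIX-domain socket appears, or give up after a retry budget. Illustrative
# only -- the real helper also verifies the pid and issues a probe RPC.
waitforsocket() {
  local sock=$1 max_retries=${2:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    [[ -S $sock ]] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}
```

Usage would mirror the log: `waitforsocket /var/tmp/spdk.sock || exit 1` before the first `rpc_cmd` call.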
00:08:46.031 [2024-09-30 23:26:25.830089] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78003 ] 00:08:46.307 [2024-09-30 23:26:25.993821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.307 [2024-09-30 23:26:26.038716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.307 [2024-09-30 23:26:26.080378] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.307 [2024-09-30 23:26:26.080421] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:46.874 
23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.874 malloc1 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.874 [2024-09-30 23:26:26.686304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:46.874 [2024-09-30 23:26:26.686459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.874 [2024-09-30 23:26:26.686501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:46.874 [2024-09-30 23:26:26.686545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.874 [2024-09-30 23:26:26.688689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.874 [2024-09-30 23:26:26.688772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:46.874 pt1 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.874 malloc2 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.874 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.134 [2024-09-30 23:26:26.728788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:47.134 [2024-09-30 23:26:26.728931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.134 [2024-09-30 23:26:26.728957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:47.134 [2024-09-30 23:26:26.728970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.134 [2024-09-30 23:26:26.731475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.134 [2024-09-30 23:26:26.731512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:47.134 
pt2 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.134 malloc3 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.134 [2024-09-30 23:26:26.757299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:47.134 [2024-09-30 23:26:26.757428] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.134 [2024-09-30 23:26:26.757462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:47.134 [2024-09-30 23:26:26.757492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.134 [2024-09-30 23:26:26.759506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.134 [2024-09-30 23:26:26.759581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:47.134 pt3 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.134 [2024-09-30 23:26:26.769336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.134 [2024-09-30 23:26:26.771180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.134 [2024-09-30 23:26:26.771286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:47.134 [2024-09-30 23:26:26.771458] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:47.134 [2024-09-30 23:26:26.771502] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:47.134 [2024-09-30 23:26:26.771766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:47.134 [2024-09-30 23:26:26.771941] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:47.134 [2024-09-30 23:26:26.771989] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:47.134 [2024-09-30 23:26:26.772138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.134 23:26:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.134 "name": "raid_bdev1", 00:08:47.134 "uuid": "843220cf-9490-40a8-a9b4-86a77388610d", 00:08:47.134 "strip_size_kb": 64, 00:08:47.134 "state": "online", 00:08:47.134 "raid_level": "concat", 00:08:47.134 "superblock": true, 00:08:47.134 "num_base_bdevs": 3, 00:08:47.134 "num_base_bdevs_discovered": 3, 00:08:47.134 "num_base_bdevs_operational": 3, 00:08:47.134 "base_bdevs_list": [ 00:08:47.134 { 00:08:47.134 "name": "pt1", 00:08:47.134 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.134 "is_configured": true, 00:08:47.134 "data_offset": 2048, 00:08:47.134 "data_size": 63488 00:08:47.134 }, 00:08:47.134 { 00:08:47.134 "name": "pt2", 00:08:47.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.134 "is_configured": true, 00:08:47.134 "data_offset": 2048, 00:08:47.134 "data_size": 63488 00:08:47.134 }, 00:08:47.134 { 00:08:47.134 "name": "pt3", 00:08:47.134 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.134 "is_configured": true, 00:08:47.134 "data_offset": 2048, 00:08:47.134 "data_size": 63488 00:08:47.134 } 00:08:47.134 ] 00:08:47.134 }' 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.134 23:26:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 [2024-09-30 23:26:27.188859] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.392 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.392 "name": "raid_bdev1", 00:08:47.392 "aliases": [ 00:08:47.392 "843220cf-9490-40a8-a9b4-86a77388610d" 00:08:47.392 ], 00:08:47.392 "product_name": "Raid Volume", 00:08:47.392 "block_size": 512, 00:08:47.392 "num_blocks": 190464, 00:08:47.392 "uuid": "843220cf-9490-40a8-a9b4-86a77388610d", 00:08:47.392 "assigned_rate_limits": { 00:08:47.392 "rw_ios_per_sec": 0, 00:08:47.392 "rw_mbytes_per_sec": 0, 00:08:47.392 "r_mbytes_per_sec": 0, 00:08:47.392 "w_mbytes_per_sec": 0 00:08:47.392 }, 00:08:47.392 "claimed": false, 00:08:47.392 "zoned": false, 00:08:47.392 "supported_io_types": { 00:08:47.392 "read": true, 00:08:47.392 "write": true, 00:08:47.392 "unmap": true, 00:08:47.392 "flush": true, 00:08:47.392 "reset": true, 00:08:47.392 "nvme_admin": false, 00:08:47.392 "nvme_io": false, 00:08:47.392 "nvme_io_md": false, 00:08:47.392 "write_zeroes": true, 00:08:47.392 "zcopy": false, 00:08:47.392 "get_zone_info": false, 00:08:47.392 "zone_management": false, 00:08:47.392 "zone_append": false, 00:08:47.392 "compare": 
false, 00:08:47.392 "compare_and_write": false, 00:08:47.392 "abort": false, 00:08:47.392 "seek_hole": false, 00:08:47.392 "seek_data": false, 00:08:47.392 "copy": false, 00:08:47.392 "nvme_iov_md": false 00:08:47.392 }, 00:08:47.392 "memory_domains": [ 00:08:47.392 { 00:08:47.392 "dma_device_id": "system", 00:08:47.392 "dma_device_type": 1 00:08:47.392 }, 00:08:47.392 { 00:08:47.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.392 "dma_device_type": 2 00:08:47.392 }, 00:08:47.392 { 00:08:47.392 "dma_device_id": "system", 00:08:47.392 "dma_device_type": 1 00:08:47.392 }, 00:08:47.392 { 00:08:47.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.392 "dma_device_type": 2 00:08:47.392 }, 00:08:47.392 { 00:08:47.392 "dma_device_id": "system", 00:08:47.392 "dma_device_type": 1 00:08:47.392 }, 00:08:47.393 { 00:08:47.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.393 "dma_device_type": 2 00:08:47.393 } 00:08:47.393 ], 00:08:47.393 "driver_specific": { 00:08:47.393 "raid": { 00:08:47.393 "uuid": "843220cf-9490-40a8-a9b4-86a77388610d", 00:08:47.393 "strip_size_kb": 64, 00:08:47.393 "state": "online", 00:08:47.393 "raid_level": "concat", 00:08:47.393 "superblock": true, 00:08:47.393 "num_base_bdevs": 3, 00:08:47.393 "num_base_bdevs_discovered": 3, 00:08:47.393 "num_base_bdevs_operational": 3, 00:08:47.393 "base_bdevs_list": [ 00:08:47.393 { 00:08:47.393 "name": "pt1", 00:08:47.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.393 "is_configured": true, 00:08:47.393 "data_offset": 2048, 00:08:47.393 "data_size": 63488 00:08:47.393 }, 00:08:47.393 { 00:08:47.393 "name": "pt2", 00:08:47.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.393 "is_configured": true, 00:08:47.393 "data_offset": 2048, 00:08:47.393 "data_size": 63488 00:08:47.393 }, 00:08:47.393 { 00:08:47.393 "name": "pt3", 00:08:47.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.393 "is_configured": true, 00:08:47.393 "data_offset": 2048, 00:08:47.393 
"data_size": 63488 00:08:47.393 } 00:08:47.393 ] 00:08:47.393 } 00:08:47.393 } 00:08:47.393 }' 00:08:47.393 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:47.650 pt2 00:08:47.650 pt3' 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.650 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:47.651 23:26:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.651 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.651 [2024-09-30 23:26:27.496280] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.909 23:26:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=843220cf-9490-40a8-a9b4-86a77388610d 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 843220cf-9490-40a8-a9b4-86a77388610d ']' 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.909 [2024-09-30 23:26:27.539951] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.909 [2024-09-30 23:26:27.539980] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.909 [2024-09-30 23:26:27.540051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.909 [2024-09-30 23:26:27.540113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.909 [2024-09-30 23:26:27.540128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.909 23:26:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.909 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.909 [2024-09-30 23:26:27.675729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:47.909 [2024-09-30 23:26:27.677665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:08:47.909 [2024-09-30 23:26:27.677713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:47.909 [2024-09-30 23:26:27.677761] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:47.909 [2024-09-30 23:26:27.677811] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:47.909 [2024-09-30 23:26:27.677833] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:47.909 [2024-09-30 23:26:27.677845] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:47.909 [2024-09-30 23:26:27.677855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:47.909 request: 00:08:47.909 { 00:08:47.909 "name": "raid_bdev1", 00:08:47.909 "raid_level": "concat", 00:08:47.909 "base_bdevs": [ 00:08:47.909 "malloc1", 00:08:47.909 "malloc2", 00:08:47.909 "malloc3" 00:08:47.909 ], 00:08:47.909 "strip_size_kb": 64, 00:08:47.909 "superblock": false, 00:08:47.909 "method": "bdev_raid_create", 00:08:47.910 "req_id": 1 00:08:47.910 } 00:08:47.910 Got JSON-RPC error response 00:08:47.910 response: 00:08:47.910 { 00:08:47.910 "code": -17, 00:08:47.910 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:47.910 } 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.910 [2024-09-30 23:26:27.727606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:47.910 [2024-09-30 23:26:27.727704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.910 [2024-09-30 23:26:27.727735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:47.910 [2024-09-30 23:26:27.727764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.910 [2024-09-30 23:26:27.729933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.910 [2024-09-30 23:26:27.730004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:47.910 [2024-09-30 23:26:27.730088] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:47.910 [2024-09-30 23:26:27.730138] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.910 pt1 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.910 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.169 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.169 "name": "raid_bdev1", 
00:08:48.169 "uuid": "843220cf-9490-40a8-a9b4-86a77388610d", 00:08:48.169 "strip_size_kb": 64, 00:08:48.169 "state": "configuring", 00:08:48.169 "raid_level": "concat", 00:08:48.169 "superblock": true, 00:08:48.169 "num_base_bdevs": 3, 00:08:48.169 "num_base_bdevs_discovered": 1, 00:08:48.169 "num_base_bdevs_operational": 3, 00:08:48.169 "base_bdevs_list": [ 00:08:48.169 { 00:08:48.169 "name": "pt1", 00:08:48.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.169 "is_configured": true, 00:08:48.169 "data_offset": 2048, 00:08:48.169 "data_size": 63488 00:08:48.169 }, 00:08:48.169 { 00:08:48.169 "name": null, 00:08:48.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.169 "is_configured": false, 00:08:48.169 "data_offset": 2048, 00:08:48.169 "data_size": 63488 00:08:48.169 }, 00:08:48.169 { 00:08:48.169 "name": null, 00:08:48.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.169 "is_configured": false, 00:08:48.169 "data_offset": 2048, 00:08:48.169 "data_size": 63488 00:08:48.169 } 00:08:48.169 ] 00:08:48.169 }' 00:08:48.169 23:26:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.169 23:26:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.428 [2024-09-30 23:26:28.134991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:48.428 [2024-09-30 23:26:28.135111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.428 [2024-09-30 23:26:28.135148] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:48.428 [2024-09-30 23:26:28.135180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.428 [2024-09-30 23:26:28.135576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.428 [2024-09-30 23:26:28.135642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:48.428 [2024-09-30 23:26:28.135742] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:48.428 [2024-09-30 23:26:28.135794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:48.428 pt2 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.428 [2024-09-30 23:26:28.142999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.428 "name": "raid_bdev1", 00:08:48.428 "uuid": "843220cf-9490-40a8-a9b4-86a77388610d", 00:08:48.428 "strip_size_kb": 64, 00:08:48.428 "state": "configuring", 00:08:48.428 "raid_level": "concat", 00:08:48.428 "superblock": true, 00:08:48.428 "num_base_bdevs": 3, 00:08:48.428 "num_base_bdevs_discovered": 1, 00:08:48.428 "num_base_bdevs_operational": 3, 00:08:48.428 "base_bdevs_list": [ 00:08:48.428 { 00:08:48.428 "name": "pt1", 00:08:48.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.428 "is_configured": true, 00:08:48.428 "data_offset": 2048, 00:08:48.428 "data_size": 63488 00:08:48.428 }, 00:08:48.428 { 00:08:48.428 "name": null, 00:08:48.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.428 "is_configured": false, 00:08:48.428 "data_offset": 0, 00:08:48.428 "data_size": 63488 00:08:48.428 }, 00:08:48.428 { 00:08:48.428 "name": null, 00:08:48.428 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.428 "is_configured": false, 00:08:48.428 "data_offset": 2048, 00:08:48.428 "data_size": 63488 00:08:48.428 } 00:08:48.428 ] 00:08:48.428 }' 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.428 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.995 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:48.995 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.995 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:48.995 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.995 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.995 [2024-09-30 23:26:28.554315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:48.995 [2024-09-30 23:26:28.554457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.995 [2024-09-30 23:26:28.554493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:48.996 [2024-09-30 23:26:28.554520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.996 [2024-09-30 23:26:28.554950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.996 [2024-09-30 23:26:28.555016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:48.996 [2024-09-30 23:26:28.555123] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:48.996 [2024-09-30 23:26:28.555172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:48.996 pt2 00:08:48.996 23:26:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.996 [2024-09-30 23:26:28.566271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:48.996 [2024-09-30 23:26:28.566360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.996 [2024-09-30 23:26:28.566393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:48.996 [2024-09-30 23:26:28.566419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.996 [2024-09-30 23:26:28.566774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.996 [2024-09-30 23:26:28.566834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:48.996 [2024-09-30 23:26:28.566926] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:48.996 [2024-09-30 23:26:28.566974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:48.996 [2024-09-30 23:26:28.567083] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:48.996 [2024-09-30 23:26:28.567120] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.996 [2024-09-30 23:26:28.567362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:08:48.996 [2024-09-30 23:26:28.567501] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:48.996 [2024-09-30 23:26:28.567542] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:48.996 [2024-09-30 23:26:28.567674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.996 pt3 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.996 23:26:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.996 "name": "raid_bdev1", 00:08:48.996 "uuid": "843220cf-9490-40a8-a9b4-86a77388610d", 00:08:48.996 "strip_size_kb": 64, 00:08:48.996 "state": "online", 00:08:48.996 "raid_level": "concat", 00:08:48.996 "superblock": true, 00:08:48.996 "num_base_bdevs": 3, 00:08:48.996 "num_base_bdevs_discovered": 3, 00:08:48.996 "num_base_bdevs_operational": 3, 00:08:48.996 "base_bdevs_list": [ 00:08:48.996 { 00:08:48.996 "name": "pt1", 00:08:48.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.996 "is_configured": true, 00:08:48.996 "data_offset": 2048, 00:08:48.996 "data_size": 63488 00:08:48.996 }, 00:08:48.996 { 00:08:48.996 "name": "pt2", 00:08:48.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.996 "is_configured": true, 00:08:48.996 "data_offset": 2048, 00:08:48.996 "data_size": 63488 00:08:48.996 }, 00:08:48.996 { 00:08:48.996 "name": "pt3", 00:08:48.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.996 "is_configured": true, 00:08:48.996 "data_offset": 2048, 00:08:48.996 "data_size": 63488 00:08:48.996 } 00:08:48.996 ] 00:08:48.996 }' 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.996 23:26:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.255 [2024-09-30 23:26:29.021794] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.255 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.255 "name": "raid_bdev1", 00:08:49.255 "aliases": [ 00:08:49.255 "843220cf-9490-40a8-a9b4-86a77388610d" 00:08:49.255 ], 00:08:49.255 "product_name": "Raid Volume", 00:08:49.255 "block_size": 512, 00:08:49.255 "num_blocks": 190464, 00:08:49.255 "uuid": "843220cf-9490-40a8-a9b4-86a77388610d", 00:08:49.255 "assigned_rate_limits": { 00:08:49.255 "rw_ios_per_sec": 0, 00:08:49.255 "rw_mbytes_per_sec": 0, 00:08:49.255 "r_mbytes_per_sec": 0, 00:08:49.255 "w_mbytes_per_sec": 0 00:08:49.255 }, 00:08:49.255 "claimed": false, 00:08:49.255 "zoned": false, 00:08:49.255 "supported_io_types": { 00:08:49.255 "read": true, 00:08:49.255 "write": true, 00:08:49.255 "unmap": true, 00:08:49.255 "flush": true, 00:08:49.255 "reset": true, 00:08:49.255 "nvme_admin": false, 00:08:49.255 "nvme_io": false, 00:08:49.255 
"nvme_io_md": false, 00:08:49.255 "write_zeroes": true, 00:08:49.255 "zcopy": false, 00:08:49.255 "get_zone_info": false, 00:08:49.255 "zone_management": false, 00:08:49.255 "zone_append": false, 00:08:49.255 "compare": false, 00:08:49.255 "compare_and_write": false, 00:08:49.256 "abort": false, 00:08:49.256 "seek_hole": false, 00:08:49.256 "seek_data": false, 00:08:49.256 "copy": false, 00:08:49.256 "nvme_iov_md": false 00:08:49.256 }, 00:08:49.256 "memory_domains": [ 00:08:49.256 { 00:08:49.256 "dma_device_id": "system", 00:08:49.256 "dma_device_type": 1 00:08:49.256 }, 00:08:49.256 { 00:08:49.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.256 "dma_device_type": 2 00:08:49.256 }, 00:08:49.256 { 00:08:49.256 "dma_device_id": "system", 00:08:49.256 "dma_device_type": 1 00:08:49.256 }, 00:08:49.256 { 00:08:49.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.256 "dma_device_type": 2 00:08:49.256 }, 00:08:49.256 { 00:08:49.256 "dma_device_id": "system", 00:08:49.256 "dma_device_type": 1 00:08:49.256 }, 00:08:49.256 { 00:08:49.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.256 "dma_device_type": 2 00:08:49.256 } 00:08:49.256 ], 00:08:49.256 "driver_specific": { 00:08:49.256 "raid": { 00:08:49.256 "uuid": "843220cf-9490-40a8-a9b4-86a77388610d", 00:08:49.256 "strip_size_kb": 64, 00:08:49.256 "state": "online", 00:08:49.256 "raid_level": "concat", 00:08:49.256 "superblock": true, 00:08:49.256 "num_base_bdevs": 3, 00:08:49.256 "num_base_bdevs_discovered": 3, 00:08:49.256 "num_base_bdevs_operational": 3, 00:08:49.256 "base_bdevs_list": [ 00:08:49.256 { 00:08:49.256 "name": "pt1", 00:08:49.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.256 "is_configured": true, 00:08:49.256 "data_offset": 2048, 00:08:49.256 "data_size": 63488 00:08:49.256 }, 00:08:49.256 { 00:08:49.256 "name": "pt2", 00:08:49.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.256 "is_configured": true, 00:08:49.256 "data_offset": 2048, 00:08:49.256 "data_size": 
63488 00:08:49.256 }, 00:08:49.256 { 00:08:49.256 "name": "pt3", 00:08:49.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.256 "is_configured": true, 00:08:49.256 "data_offset": 2048, 00:08:49.256 "data_size": 63488 00:08:49.256 } 00:08:49.256 ] 00:08:49.256 } 00:08:49.256 } 00:08:49.256 }' 00:08:49.256 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:49.513 pt2 00:08:49.513 pt3' 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.513 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:08:49.514 [2024-09-30 23:26:29.301244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 843220cf-9490-40a8-a9b4-86a77388610d '!=' 843220cf-9490-40a8-a9b4-86a77388610d ']' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78003 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78003 ']' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78003 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78003 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78003' 00:08:49.514 killing process with pid 78003 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78003 00:08:49.514 [2024-09-30 23:26:29.363207] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.514 [2024-09-30 
23:26:29.363350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.514 [2024-09-30 23:26:29.363442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.514 [2024-09-30 23:26:29.363485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:49.514 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78003 00:08:49.771 [2024-09-30 23:26:29.396480] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.030 23:26:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:50.030 00:08:50.030 real 0m3.893s 00:08:50.030 user 0m6.077s 00:08:50.030 sys 0m0.854s 00:08:50.030 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.030 ************************************ 00:08:50.030 END TEST raid_superblock_test 00:08:50.030 ************************************ 00:08:50.030 23:26:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.030 23:26:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:50.030 23:26:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:50.030 23:26:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.030 23:26:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.030 ************************************ 00:08:50.030 START TEST raid_read_error_test 00:08:50.030 ************************************ 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:50.030 
23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:50.030 23:26:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pKSa3y9XWj 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78235 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78235 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78235 ']' 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.030 23:26:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.030 [2024-09-30 23:26:29.815687] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:08:50.030 [2024-09-30 23:26:29.815901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78235 ] 00:08:50.289 [2024-09-30 23:26:29.976746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.290 [2024-09-30 23:26:30.022264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.290 [2024-09-30 23:26:30.064851] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.290 [2024-09-30 23:26:30.064889] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.855 BaseBdev1_malloc 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.855 true 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.855 [2024-09-30 23:26:30.699021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:50.855 [2024-09-30 23:26:30.699081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.855 [2024-09-30 23:26:30.699100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:50.855 [2024-09-30 23:26:30.699109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.855 [2024-09-30 23:26:30.701265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.855 [2024-09-30 23:26:30.701304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:50.855 BaseBdev1 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.855 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 BaseBdev2_malloc 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 true 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 [2024-09-30 23:26:30.757634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:51.114 [2024-09-30 23:26:30.757707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.114 [2024-09-30 23:26:30.757736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:51.114 [2024-09-30 23:26:30.757749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.114 [2024-09-30 23:26:30.760527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.114 [2024-09-30 23:26:30.760570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:51.114 BaseBdev2 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 BaseBdev3_malloc 00:08:51.114 23:26:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 true 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 [2024-09-30 23:26:30.798106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:51.114 [2024-09-30 23:26:30.798156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.114 [2024-09-30 23:26:30.798173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:51.114 [2024-09-30 23:26:30.798181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.114 [2024-09-30 23:26:30.800199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.114 [2024-09-30 23:26:30.800237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:51.114 BaseBdev3 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 [2024-09-30 23:26:30.810141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.114 [2024-09-30 23:26:30.811954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.114 [2024-09-30 23:26:30.812033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.114 [2024-09-30 23:26:30.812202] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:51.114 [2024-09-30 23:26:30.812218] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:51.114 [2024-09-30 23:26:30.812473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:51.114 [2024-09-30 23:26:30.812599] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:51.114 [2024-09-30 23:26:30.812620] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:51.114 [2024-09-30 23:26:30.812750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.114 23:26:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.114 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.114 "name": "raid_bdev1", 00:08:51.114 "uuid": "49c86cdf-4d27-4a72-ad75-80e73471f1b2", 00:08:51.114 "strip_size_kb": 64, 00:08:51.114 "state": "online", 00:08:51.114 "raid_level": "concat", 00:08:51.114 "superblock": true, 00:08:51.114 "num_base_bdevs": 3, 00:08:51.114 "num_base_bdevs_discovered": 3, 00:08:51.114 "num_base_bdevs_operational": 3, 00:08:51.114 "base_bdevs_list": [ 00:08:51.114 { 00:08:51.114 "name": "BaseBdev1", 00:08:51.114 "uuid": "5719afdd-e7c1-5d33-b1b4-3a01b87312e8", 00:08:51.114 "is_configured": true, 00:08:51.114 "data_offset": 2048, 00:08:51.114 "data_size": 63488 00:08:51.114 }, 00:08:51.114 { 00:08:51.114 "name": "BaseBdev2", 00:08:51.114 "uuid": "40c0bbe3-d0a0-56e2-add6-03b347ca95b3", 00:08:51.114 "is_configured": true, 00:08:51.114 "data_offset": 2048, 00:08:51.114 "data_size": 63488 
00:08:51.114 }, 00:08:51.114 { 00:08:51.114 "name": "BaseBdev3", 00:08:51.115 "uuid": "00cf57cd-ec2f-54fd-b26e-1e117c8b644b", 00:08:51.115 "is_configured": true, 00:08:51.115 "data_offset": 2048, 00:08:51.115 "data_size": 63488 00:08:51.115 } 00:08:51.115 ] 00:08:51.115 }' 00:08:51.115 23:26:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.115 23:26:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.372 23:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:51.372 23:26:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:51.631 [2024-09-30 23:26:31.281729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:52.567 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:52.567 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.567 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.568 "name": "raid_bdev1", 00:08:52.568 "uuid": "49c86cdf-4d27-4a72-ad75-80e73471f1b2", 00:08:52.568 "strip_size_kb": 64, 00:08:52.568 "state": "online", 00:08:52.568 "raid_level": "concat", 00:08:52.568 "superblock": true, 00:08:52.568 "num_base_bdevs": 3, 00:08:52.568 "num_base_bdevs_discovered": 3, 00:08:52.568 "num_base_bdevs_operational": 3, 00:08:52.568 "base_bdevs_list": [ 00:08:52.568 { 00:08:52.568 "name": "BaseBdev1", 00:08:52.568 "uuid": "5719afdd-e7c1-5d33-b1b4-3a01b87312e8", 00:08:52.568 "is_configured": true, 00:08:52.568 "data_offset": 2048, 00:08:52.568 "data_size": 63488 
00:08:52.568 }, 00:08:52.568 { 00:08:52.568 "name": "BaseBdev2", 00:08:52.568 "uuid": "40c0bbe3-d0a0-56e2-add6-03b347ca95b3", 00:08:52.568 "is_configured": true, 00:08:52.568 "data_offset": 2048, 00:08:52.568 "data_size": 63488 00:08:52.568 }, 00:08:52.568 { 00:08:52.568 "name": "BaseBdev3", 00:08:52.568 "uuid": "00cf57cd-ec2f-54fd-b26e-1e117c8b644b", 00:08:52.568 "is_configured": true, 00:08:52.568 "data_offset": 2048, 00:08:52.568 "data_size": 63488 00:08:52.568 } 00:08:52.568 ] 00:08:52.568 }' 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.568 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.825 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.826 [2024-09-30 23:26:32.637392] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.826 [2024-09-30 23:26:32.637436] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.826 { 00:08:52.826 "results": [ 00:08:52.826 { 00:08:52.826 "job": "raid_bdev1", 00:08:52.826 "core_mask": "0x1", 00:08:52.826 "workload": "randrw", 00:08:52.826 "percentage": 50, 00:08:52.826 "status": "finished", 00:08:52.826 "queue_depth": 1, 00:08:52.826 "io_size": 131072, 00:08:52.826 "runtime": 1.356438, 00:08:52.826 "iops": 17416.94054575292, 00:08:52.826 "mibps": 2177.117568219115, 00:08:52.826 "io_failed": 1, 00:08:52.826 "io_timeout": 0, 00:08:52.826 "avg_latency_us": 79.60662684918584, 00:08:52.826 "min_latency_us": 24.258515283842794, 00:08:52.826 "max_latency_us": 1416.6078602620087 00:08:52.826 } 00:08:52.826 ], 00:08:52.826 "core_count": 1 00:08:52.826 } 00:08:52.826 [2024-09-30 
23:26:32.639974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.826 [2024-09-30 23:26:32.640025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.826 [2024-09-30 23:26:32.640061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.826 [2024-09-30 23:26:32.640074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78235 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78235 ']' 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78235 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.826 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78235 00:08:53.084 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.084 killing process with pid 78235 00:08:53.084 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.085 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78235' 00:08:53.085 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78235 00:08:53.085 [2024-09-30 23:26:32.689074] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:53.085 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78235 00:08:53.085 [2024-09-30 
23:26:32.714323] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pKSa3y9XWj 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:53.344 00:08:53.344 real 0m3.241s 00:08:53.344 user 0m4.004s 00:08:53.344 sys 0m0.575s 00:08:53.344 ************************************ 00:08:53.344 END TEST raid_read_error_test 00:08:53.344 ************************************ 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.344 23:26:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.344 23:26:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:53.344 23:26:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:53.344 23:26:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.344 23:26:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.344 ************************************ 00:08:53.344 START TEST raid_write_error_test 00:08:53.344 ************************************ 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:53.344 23:26:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:53.344 23:26:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OEmbf6Xg5e 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78370 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78370 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78370 ']' 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.344 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.344 [2024-09-30 23:26:33.133767] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:08:53.345 [2024-09-30 23:26:33.133910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78370 ] 00:08:53.603 [2024-09-30 23:26:33.294357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.603 [2024-09-30 23:26:33.339341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.603 [2024-09-30 23:26:33.381726] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.603 [2024-09-30 23:26:33.381760] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.171 BaseBdev1_malloc 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.171 true 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.171 [2024-09-30 23:26:33.991747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:54.171 [2024-09-30 23:26:33.991809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.171 [2024-09-30 23:26:33.991833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:54.171 [2024-09-30 23:26:33.991844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.171 [2024-09-30 23:26:33.994019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.171 [2024-09-30 23:26:33.994057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:54.171 BaseBdev1 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.171 23:26:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.431 BaseBdev2_malloc 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.431 true 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.431 [2024-09-30 23:26:34.044169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:54.431 [2024-09-30 23:26:34.044220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.431 [2024-09-30 23:26:34.044238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:54.431 [2024-09-30 23:26:34.044246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.431 [2024-09-30 23:26:34.046247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.431 [2024-09-30 23:26:34.046284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:54.431 BaseBdev2 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:54.431 23:26:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.431 BaseBdev3_malloc 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.431 true 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.431 [2024-09-30 23:26:34.084594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:54.431 [2024-09-30 23:26:34.084643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.431 [2024-09-30 23:26:34.084660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:54.431 [2024-09-30 23:26:34.084669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.431 [2024-09-30 23:26:34.086661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.431 [2024-09-30 23:26:34.086702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:54.431 BaseBdev3 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.431 [2024-09-30 23:26:34.096629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.431 [2024-09-30 23:26:34.098378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.431 [2024-09-30 23:26:34.098516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.431 [2024-09-30 23:26:34.098688] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:54.431 [2024-09-30 23:26:34.098706] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.431 [2024-09-30 23:26:34.098964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:54.431 [2024-09-30 23:26:34.099099] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:54.431 [2024-09-30 23:26:34.099109] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:54.431 [2024-09-30 23:26:34.099254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.431 "name": "raid_bdev1", 00:08:54.431 "uuid": "412877c6-c6db-4e33-bbfb-c3b39006c47a", 00:08:54.431 "strip_size_kb": 64, 00:08:54.431 "state": "online", 00:08:54.431 "raid_level": "concat", 00:08:54.431 "superblock": true, 00:08:54.431 "num_base_bdevs": 3, 00:08:54.431 "num_base_bdevs_discovered": 3, 00:08:54.431 "num_base_bdevs_operational": 3, 00:08:54.431 "base_bdevs_list": [ 00:08:54.431 { 00:08:54.431 
"name": "BaseBdev1", 00:08:54.431 "uuid": "8a402045-984c-5089-81a6-1dee8314a3e5", 00:08:54.431 "is_configured": true, 00:08:54.431 "data_offset": 2048, 00:08:54.431 "data_size": 63488 00:08:54.431 }, 00:08:54.431 { 00:08:54.431 "name": "BaseBdev2", 00:08:54.431 "uuid": "e865ba32-c028-5607-bb33-b0cda2410daa", 00:08:54.431 "is_configured": true, 00:08:54.431 "data_offset": 2048, 00:08:54.431 "data_size": 63488 00:08:54.431 }, 00:08:54.431 { 00:08:54.431 "name": "BaseBdev3", 00:08:54.431 "uuid": "0c7cb6a8-8e58-58ad-af68-04ba59ca97ff", 00:08:54.431 "is_configured": true, 00:08:54.431 "data_offset": 2048, 00:08:54.431 "data_size": 63488 00:08:54.431 } 00:08:54.431 ] 00:08:54.431 }' 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.431 23:26:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.690 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:54.690 23:26:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:54.949 [2024-09-30 23:26:34.580227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.887 "name": "raid_bdev1", 00:08:55.887 "uuid": "412877c6-c6db-4e33-bbfb-c3b39006c47a", 00:08:55.887 "strip_size_kb": 64, 00:08:55.887 "state": "online", 
00:08:55.887 "raid_level": "concat", 00:08:55.887 "superblock": true, 00:08:55.887 "num_base_bdevs": 3, 00:08:55.887 "num_base_bdevs_discovered": 3, 00:08:55.887 "num_base_bdevs_operational": 3, 00:08:55.887 "base_bdevs_list": [ 00:08:55.887 { 00:08:55.887 "name": "BaseBdev1", 00:08:55.887 "uuid": "8a402045-984c-5089-81a6-1dee8314a3e5", 00:08:55.887 "is_configured": true, 00:08:55.887 "data_offset": 2048, 00:08:55.887 "data_size": 63488 00:08:55.887 }, 00:08:55.887 { 00:08:55.887 "name": "BaseBdev2", 00:08:55.887 "uuid": "e865ba32-c028-5607-bb33-b0cda2410daa", 00:08:55.887 "is_configured": true, 00:08:55.887 "data_offset": 2048, 00:08:55.887 "data_size": 63488 00:08:55.887 }, 00:08:55.887 { 00:08:55.887 "name": "BaseBdev3", 00:08:55.887 "uuid": "0c7cb6a8-8e58-58ad-af68-04ba59ca97ff", 00:08:55.887 "is_configured": true, 00:08:55.887 "data_offset": 2048, 00:08:55.887 "data_size": 63488 00:08:55.887 } 00:08:55.887 ] 00:08:55.887 }' 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.887 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.147 [2024-09-30 23:26:35.931566] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.147 [2024-09-30 23:26:35.931607] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.147 [2024-09-30 23:26:35.934072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.147 [2024-09-30 23:26:35.934130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.147 [2024-09-30 23:26:35.934164] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.147 [2024-09-30 23:26:35.934181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:56.147 { 00:08:56.147 "results": [ 00:08:56.147 { 00:08:56.147 "job": "raid_bdev1", 00:08:56.147 "core_mask": "0x1", 00:08:56.147 "workload": "randrw", 00:08:56.147 "percentage": 50, 00:08:56.147 "status": "finished", 00:08:56.147 "queue_depth": 1, 00:08:56.147 "io_size": 131072, 00:08:56.147 "runtime": 1.352149, 00:08:56.147 "iops": 17628.234758151655, 00:08:56.147 "mibps": 2203.529344768957, 00:08:56.147 "io_failed": 1, 00:08:56.147 "io_timeout": 0, 00:08:56.147 "avg_latency_us": 78.6491820264742, 00:08:56.147 "min_latency_us": 24.258515283842794, 00:08:56.147 "max_latency_us": 1359.3711790393013 00:08:56.147 } 00:08:56.147 ], 00:08:56.147 "core_count": 1 00:08:56.147 } 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78370 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78370 ']' 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78370 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78370 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 78370' 00:08:56.147 killing process with pid 78370 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78370 00:08:56.147 [2024-09-30 23:26:35.979256] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.147 23:26:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78370 00:08:56.407 [2024-09-30 23:26:36.005180] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OEmbf6Xg5e 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:56.407 00:08:56.407 real 0m3.224s 00:08:56.407 user 0m3.990s 00:08:56.407 sys 0m0.544s 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.407 23:26:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.407 ************************************ 00:08:56.407 END TEST raid_write_error_test 00:08:56.407 ************************************ 00:08:56.667 23:26:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:56.667 23:26:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:56.667 23:26:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:56.668 23:26:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.668 23:26:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.668 ************************************ 00:08:56.668 START TEST raid_state_function_test 00:08:56.668 ************************************ 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78497 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78497' 00:08:56.668 Process raid pid: 78497 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78497 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78497 ']' 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.668 23:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.668 [2024-09-30 23:26:36.430971] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:08:56.668 [2024-09-30 23:26:36.431174] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.928 [2024-09-30 23:26:36.591396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.928 [2024-09-30 23:26:36.637543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.928 [2024-09-30 23:26:36.680161] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.928 [2024-09-30 23:26:36.680278] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.497 [2024-09-30 23:26:37.273813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.497 [2024-09-30 23:26:37.273923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.497 [2024-09-30 23:26:37.273956] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.497 [2024-09-30 23:26:37.273979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.497 [2024-09-30 23:26:37.273997] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.497 [2024-09-30 23:26:37.274020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.497 
23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.497 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.497 "name": "Existed_Raid", 00:08:57.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.497 "strip_size_kb": 0, 00:08:57.497 "state": "configuring", 00:08:57.497 "raid_level": "raid1", 00:08:57.497 "superblock": false, 00:08:57.497 "num_base_bdevs": 3, 00:08:57.497 "num_base_bdevs_discovered": 0, 00:08:57.497 "num_base_bdevs_operational": 3, 00:08:57.497 "base_bdevs_list": [ 00:08:57.497 { 00:08:57.497 "name": "BaseBdev1", 00:08:57.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.497 "is_configured": false, 00:08:57.497 "data_offset": 0, 00:08:57.497 "data_size": 0 00:08:57.497 }, 00:08:57.497 { 00:08:57.497 "name": "BaseBdev2", 00:08:57.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.497 "is_configured": false, 00:08:57.497 "data_offset": 0, 00:08:57.497 "data_size": 0 00:08:57.497 }, 00:08:57.497 { 00:08:57.497 "name": "BaseBdev3", 00:08:57.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.497 "is_configured": false, 00:08:57.497 "data_offset": 0, 00:08:57.497 "data_size": 0 00:08:57.497 } 00:08:57.497 ] 00:08:57.497 }' 00:08:57.498 23:26:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.498 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.067 [2024-09-30 23:26:37.713003] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.067 [2024-09-30 23:26:37.713104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.067 [2024-09-30 23:26:37.725011] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.067 [2024-09-30 23:26:37.725116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.067 [2024-09-30 23:26:37.725142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.067 [2024-09-30 23:26:37.725165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.067 [2024-09-30 23:26:37.725182] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.067 [2024-09-30 23:26:37.725202] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.067 [2024-09-30 23:26:37.745771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.067 BaseBdev1 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.067 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.068 [ 00:08:58.068 { 00:08:58.068 "name": "BaseBdev1", 00:08:58.068 "aliases": [ 00:08:58.068 "300d1278-2229-4df6-a822-39cfe9fc75cc" 00:08:58.068 ], 00:08:58.068 "product_name": "Malloc disk", 00:08:58.068 "block_size": 512, 00:08:58.068 "num_blocks": 65536, 00:08:58.068 "uuid": "300d1278-2229-4df6-a822-39cfe9fc75cc", 00:08:58.068 "assigned_rate_limits": { 00:08:58.068 "rw_ios_per_sec": 0, 00:08:58.068 "rw_mbytes_per_sec": 0, 00:08:58.068 "r_mbytes_per_sec": 0, 00:08:58.068 "w_mbytes_per_sec": 0 00:08:58.068 }, 00:08:58.068 "claimed": true, 00:08:58.068 "claim_type": "exclusive_write", 00:08:58.068 "zoned": false, 00:08:58.068 "supported_io_types": { 00:08:58.068 "read": true, 00:08:58.068 "write": true, 00:08:58.068 "unmap": true, 00:08:58.068 "flush": true, 00:08:58.068 "reset": true, 00:08:58.068 "nvme_admin": false, 00:08:58.068 "nvme_io": false, 00:08:58.068 "nvme_io_md": false, 00:08:58.068 "write_zeroes": true, 00:08:58.068 "zcopy": true, 00:08:58.068 "get_zone_info": false, 00:08:58.068 "zone_management": false, 00:08:58.068 "zone_append": false, 00:08:58.068 "compare": false, 00:08:58.068 "compare_and_write": false, 00:08:58.068 "abort": true, 00:08:58.068 "seek_hole": false, 00:08:58.068 "seek_data": false, 00:08:58.068 "copy": true, 00:08:58.068 "nvme_iov_md": false 00:08:58.068 }, 00:08:58.068 "memory_domains": [ 00:08:58.068 { 00:08:58.068 "dma_device_id": "system", 00:08:58.068 "dma_device_type": 1 00:08:58.068 }, 00:08:58.068 { 00:08:58.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.068 "dma_device_type": 2 00:08:58.068 } 00:08:58.068 ], 00:08:58.068 "driver_specific": {} 00:08:58.068 } 00:08:58.068 ] 00:08:58.068 23:26:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:58.068 "name": "Existed_Raid", 00:08:58.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.068 "strip_size_kb": 0, 00:08:58.068 "state": "configuring", 00:08:58.068 "raid_level": "raid1", 00:08:58.068 "superblock": false, 00:08:58.068 "num_base_bdevs": 3, 00:08:58.068 "num_base_bdevs_discovered": 1, 00:08:58.068 "num_base_bdevs_operational": 3, 00:08:58.068 "base_bdevs_list": [ 00:08:58.068 { 00:08:58.068 "name": "BaseBdev1", 00:08:58.068 "uuid": "300d1278-2229-4df6-a822-39cfe9fc75cc", 00:08:58.068 "is_configured": true, 00:08:58.068 "data_offset": 0, 00:08:58.068 "data_size": 65536 00:08:58.068 }, 00:08:58.068 { 00:08:58.068 "name": "BaseBdev2", 00:08:58.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.068 "is_configured": false, 00:08:58.068 "data_offset": 0, 00:08:58.068 "data_size": 0 00:08:58.068 }, 00:08:58.068 { 00:08:58.068 "name": "BaseBdev3", 00:08:58.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.068 "is_configured": false, 00:08:58.068 "data_offset": 0, 00:08:58.068 "data_size": 0 00:08:58.068 } 00:08:58.068 ] 00:08:58.068 }' 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.068 23:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 [2024-09-30 23:26:38.197013] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.638 [2024-09-30 23:26:38.197117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 [2024-09-30 23:26:38.209035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.638 [2024-09-30 23:26:38.210872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.638 [2024-09-30 23:26:38.210947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.638 [2024-09-30 23:26:38.210975] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.638 [2024-09-30 23:26:38.210998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.638 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.638 "name": "Existed_Raid", 00:08:58.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.638 "strip_size_kb": 0, 00:08:58.638 "state": "configuring", 00:08:58.638 "raid_level": "raid1", 00:08:58.638 "superblock": false, 00:08:58.638 "num_base_bdevs": 3, 00:08:58.638 "num_base_bdevs_discovered": 1, 00:08:58.638 "num_base_bdevs_operational": 3, 00:08:58.638 "base_bdevs_list": [ 00:08:58.638 { 00:08:58.638 "name": "BaseBdev1", 00:08:58.638 "uuid": "300d1278-2229-4df6-a822-39cfe9fc75cc", 00:08:58.638 "is_configured": true, 00:08:58.638 "data_offset": 0, 00:08:58.638 "data_size": 65536 00:08:58.638 }, 00:08:58.638 { 00:08:58.638 "name": "BaseBdev2", 00:08:58.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.639 
"is_configured": false, 00:08:58.639 "data_offset": 0, 00:08:58.639 "data_size": 0 00:08:58.639 }, 00:08:58.639 { 00:08:58.639 "name": "BaseBdev3", 00:08:58.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.639 "is_configured": false, 00:08:58.639 "data_offset": 0, 00:08:58.639 "data_size": 0 00:08:58.639 } 00:08:58.639 ] 00:08:58.639 }' 00:08:58.639 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.639 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.910 [2024-09-30 23:26:38.713393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.910 BaseBdev2 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.910 23:26:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.910 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.910 [ 00:08:58.910 { 00:08:58.910 "name": "BaseBdev2", 00:08:58.910 "aliases": [ 00:08:58.910 "23bb0e80-f453-4e82-af88-3912924d7132" 00:08:58.910 ], 00:08:58.910 "product_name": "Malloc disk", 00:08:58.910 "block_size": 512, 00:08:58.910 "num_blocks": 65536, 00:08:58.910 "uuid": "23bb0e80-f453-4e82-af88-3912924d7132", 00:08:58.910 "assigned_rate_limits": { 00:08:58.910 "rw_ios_per_sec": 0, 00:08:58.910 "rw_mbytes_per_sec": 0, 00:08:58.910 "r_mbytes_per_sec": 0, 00:08:58.910 "w_mbytes_per_sec": 0 00:08:58.910 }, 00:08:58.910 "claimed": true, 00:08:58.910 "claim_type": "exclusive_write", 00:08:58.910 "zoned": false, 00:08:58.910 "supported_io_types": { 00:08:58.910 "read": true, 00:08:58.910 "write": true, 00:08:58.910 "unmap": true, 00:08:58.910 "flush": true, 00:08:58.910 "reset": true, 00:08:58.910 "nvme_admin": false, 00:08:58.910 "nvme_io": false, 00:08:58.910 "nvme_io_md": false, 00:08:58.910 "write_zeroes": true, 00:08:58.910 "zcopy": true, 00:08:58.910 "get_zone_info": false, 00:08:58.910 "zone_management": false, 00:08:58.910 "zone_append": false, 00:08:58.910 "compare": false, 00:08:58.910 "compare_and_write": false, 00:08:58.910 "abort": true, 00:08:58.910 "seek_hole": false, 00:08:58.910 "seek_data": false, 00:08:58.910 "copy": true, 00:08:58.910 "nvme_iov_md": false 00:08:58.911 }, 00:08:58.911 
"memory_domains": [ 00:08:58.911 { 00:08:58.911 "dma_device_id": "system", 00:08:58.911 "dma_device_type": 1 00:08:58.911 }, 00:08:58.911 { 00:08:58.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.911 "dma_device_type": 2 00:08:58.911 } 00:08:58.911 ], 00:08:58.911 "driver_specific": {} 00:08:58.911 } 00:08:58.911 ] 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.911 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.189 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:59.189 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.189 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.189 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.189 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.189 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.189 "name": "Existed_Raid", 00:08:59.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.189 "strip_size_kb": 0, 00:08:59.189 "state": "configuring", 00:08:59.189 "raid_level": "raid1", 00:08:59.189 "superblock": false, 00:08:59.189 "num_base_bdevs": 3, 00:08:59.189 "num_base_bdevs_discovered": 2, 00:08:59.189 "num_base_bdevs_operational": 3, 00:08:59.189 "base_bdevs_list": [ 00:08:59.189 { 00:08:59.189 "name": "BaseBdev1", 00:08:59.189 "uuid": "300d1278-2229-4df6-a822-39cfe9fc75cc", 00:08:59.189 "is_configured": true, 00:08:59.189 "data_offset": 0, 00:08:59.189 "data_size": 65536 00:08:59.189 }, 00:08:59.189 { 00:08:59.189 "name": "BaseBdev2", 00:08:59.189 "uuid": "23bb0e80-f453-4e82-af88-3912924d7132", 00:08:59.189 "is_configured": true, 00:08:59.189 "data_offset": 0, 00:08:59.189 "data_size": 65536 00:08:59.189 }, 00:08:59.189 { 00:08:59.189 "name": "BaseBdev3", 00:08:59.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.189 "is_configured": false, 00:08:59.189 "data_offset": 0, 00:08:59.189 "data_size": 0 00:08:59.189 } 00:08:59.189 ] 00:08:59.189 }' 00:08:59.189 23:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.189 23:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.448 [2024-09-30 23:26:39.211504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.448 [2024-09-30 23:26:39.211629] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:59.448 [2024-09-30 23:26:39.211658] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:59.448 [2024-09-30 23:26:39.212001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:59.448 [2024-09-30 23:26:39.212230] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:59.448 [2024-09-30 23:26:39.212272] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:59.448 [2024-09-30 23:26:39.212516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.448 BaseBdev3 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.448 [ 00:08:59.448 { 00:08:59.448 "name": "BaseBdev3", 00:08:59.448 "aliases": [ 00:08:59.448 "39a8b139-05b9-41e4-897c-977ce7480c92" 00:08:59.448 ], 00:08:59.448 "product_name": "Malloc disk", 00:08:59.448 "block_size": 512, 00:08:59.448 "num_blocks": 65536, 00:08:59.448 "uuid": "39a8b139-05b9-41e4-897c-977ce7480c92", 00:08:59.448 "assigned_rate_limits": { 00:08:59.448 "rw_ios_per_sec": 0, 00:08:59.448 "rw_mbytes_per_sec": 0, 00:08:59.448 "r_mbytes_per_sec": 0, 00:08:59.448 "w_mbytes_per_sec": 0 00:08:59.448 }, 00:08:59.448 "claimed": true, 00:08:59.448 "claim_type": "exclusive_write", 00:08:59.448 "zoned": false, 00:08:59.448 "supported_io_types": { 00:08:59.448 "read": true, 00:08:59.448 "write": true, 00:08:59.448 "unmap": true, 00:08:59.448 "flush": true, 00:08:59.448 "reset": true, 00:08:59.448 "nvme_admin": false, 00:08:59.448 "nvme_io": false, 00:08:59.448 "nvme_io_md": false, 00:08:59.448 "write_zeroes": true, 00:08:59.448 "zcopy": true, 00:08:59.448 "get_zone_info": false, 00:08:59.448 "zone_management": false, 00:08:59.448 "zone_append": false, 00:08:59.448 "compare": false, 00:08:59.448 "compare_and_write": false, 00:08:59.448 "abort": true, 00:08:59.448 "seek_hole": false, 00:08:59.448 "seek_data": false, 00:08:59.448 
"copy": true, 00:08:59.448 "nvme_iov_md": false 00:08:59.448 }, 00:08:59.448 "memory_domains": [ 00:08:59.448 { 00:08:59.448 "dma_device_id": "system", 00:08:59.448 "dma_device_type": 1 00:08:59.448 }, 00:08:59.448 { 00:08:59.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.448 "dma_device_type": 2 00:08:59.448 } 00:08:59.448 ], 00:08:59.448 "driver_specific": {} 00:08:59.448 } 00:08:59.448 ] 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.448 23:26:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.448 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.708 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.708 "name": "Existed_Raid", 00:08:59.708 "uuid": "1ae383bd-3b0e-45b9-9c8f-9b4dfbc1baab", 00:08:59.708 "strip_size_kb": 0, 00:08:59.708 "state": "online", 00:08:59.708 "raid_level": "raid1", 00:08:59.708 "superblock": false, 00:08:59.708 "num_base_bdevs": 3, 00:08:59.708 "num_base_bdevs_discovered": 3, 00:08:59.708 "num_base_bdevs_operational": 3, 00:08:59.708 "base_bdevs_list": [ 00:08:59.708 { 00:08:59.708 "name": "BaseBdev1", 00:08:59.708 "uuid": "300d1278-2229-4df6-a822-39cfe9fc75cc", 00:08:59.708 "is_configured": true, 00:08:59.708 "data_offset": 0, 00:08:59.708 "data_size": 65536 00:08:59.708 }, 00:08:59.708 { 00:08:59.708 "name": "BaseBdev2", 00:08:59.708 "uuid": "23bb0e80-f453-4e82-af88-3912924d7132", 00:08:59.708 "is_configured": true, 00:08:59.708 "data_offset": 0, 00:08:59.708 "data_size": 65536 00:08:59.708 }, 00:08:59.708 { 00:08:59.708 "name": "BaseBdev3", 00:08:59.708 "uuid": "39a8b139-05b9-41e4-897c-977ce7480c92", 00:08:59.708 "is_configured": true, 00:08:59.708 "data_offset": 0, 00:08:59.708 "data_size": 65536 00:08:59.708 } 00:08:59.708 ] 00:08:59.708 }' 00:08:59.708 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.708 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.967 23:26:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.967 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.968 [2024-09-30 23:26:39.687246] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.968 "name": "Existed_Raid", 00:08:59.968 "aliases": [ 00:08:59.968 "1ae383bd-3b0e-45b9-9c8f-9b4dfbc1baab" 00:08:59.968 ], 00:08:59.968 "product_name": "Raid Volume", 00:08:59.968 "block_size": 512, 00:08:59.968 "num_blocks": 65536, 00:08:59.968 "uuid": "1ae383bd-3b0e-45b9-9c8f-9b4dfbc1baab", 00:08:59.968 "assigned_rate_limits": { 00:08:59.968 "rw_ios_per_sec": 0, 00:08:59.968 "rw_mbytes_per_sec": 0, 00:08:59.968 "r_mbytes_per_sec": 0, 00:08:59.968 "w_mbytes_per_sec": 0 00:08:59.968 }, 00:08:59.968 "claimed": false, 00:08:59.968 "zoned": false, 
00:08:59.968 "supported_io_types": { 00:08:59.968 "read": true, 00:08:59.968 "write": true, 00:08:59.968 "unmap": false, 00:08:59.968 "flush": false, 00:08:59.968 "reset": true, 00:08:59.968 "nvme_admin": false, 00:08:59.968 "nvme_io": false, 00:08:59.968 "nvme_io_md": false, 00:08:59.968 "write_zeroes": true, 00:08:59.968 "zcopy": false, 00:08:59.968 "get_zone_info": false, 00:08:59.968 "zone_management": false, 00:08:59.968 "zone_append": false, 00:08:59.968 "compare": false, 00:08:59.968 "compare_and_write": false, 00:08:59.968 "abort": false, 00:08:59.968 "seek_hole": false, 00:08:59.968 "seek_data": false, 00:08:59.968 "copy": false, 00:08:59.968 "nvme_iov_md": false 00:08:59.968 }, 00:08:59.968 "memory_domains": [ 00:08:59.968 { 00:08:59.968 "dma_device_id": "system", 00:08:59.968 "dma_device_type": 1 00:08:59.968 }, 00:08:59.968 { 00:08:59.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.968 "dma_device_type": 2 00:08:59.968 }, 00:08:59.968 { 00:08:59.968 "dma_device_id": "system", 00:08:59.968 "dma_device_type": 1 00:08:59.968 }, 00:08:59.968 { 00:08:59.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.968 "dma_device_type": 2 00:08:59.968 }, 00:08:59.968 { 00:08:59.968 "dma_device_id": "system", 00:08:59.968 "dma_device_type": 1 00:08:59.968 }, 00:08:59.968 { 00:08:59.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.968 "dma_device_type": 2 00:08:59.968 } 00:08:59.968 ], 00:08:59.968 "driver_specific": { 00:08:59.968 "raid": { 00:08:59.968 "uuid": "1ae383bd-3b0e-45b9-9c8f-9b4dfbc1baab", 00:08:59.968 "strip_size_kb": 0, 00:08:59.968 "state": "online", 00:08:59.968 "raid_level": "raid1", 00:08:59.968 "superblock": false, 00:08:59.968 "num_base_bdevs": 3, 00:08:59.968 "num_base_bdevs_discovered": 3, 00:08:59.968 "num_base_bdevs_operational": 3, 00:08:59.968 "base_bdevs_list": [ 00:08:59.968 { 00:08:59.968 "name": "BaseBdev1", 00:08:59.968 "uuid": "300d1278-2229-4df6-a822-39cfe9fc75cc", 00:08:59.968 "is_configured": true, 00:08:59.968 
"data_offset": 0, 00:08:59.968 "data_size": 65536 00:08:59.968 }, 00:08:59.968 { 00:08:59.968 "name": "BaseBdev2", 00:08:59.968 "uuid": "23bb0e80-f453-4e82-af88-3912924d7132", 00:08:59.968 "is_configured": true, 00:08:59.968 "data_offset": 0, 00:08:59.968 "data_size": 65536 00:08:59.968 }, 00:08:59.968 { 00:08:59.968 "name": "BaseBdev3", 00:08:59.968 "uuid": "39a8b139-05b9-41e4-897c-977ce7480c92", 00:08:59.968 "is_configured": true, 00:08:59.968 "data_offset": 0, 00:08:59.968 "data_size": 65536 00:08:59.968 } 00:08:59.968 ] 00:08:59.968 } 00:08:59.968 } 00:08:59.968 }' 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:59.968 BaseBdev2 00:08:59.968 BaseBdev3' 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.968 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.228 [2024-09-30 23:26:39.951044] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.228 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.229 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.229 23:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.229 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.229 23:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.229 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.229 "name": "Existed_Raid", 00:09:00.229 "uuid": "1ae383bd-3b0e-45b9-9c8f-9b4dfbc1baab", 00:09:00.229 "strip_size_kb": 0, 00:09:00.229 "state": "online", 00:09:00.229 "raid_level": "raid1", 00:09:00.229 "superblock": false, 00:09:00.229 "num_base_bdevs": 3, 00:09:00.229 "num_base_bdevs_discovered": 2, 00:09:00.229 "num_base_bdevs_operational": 2, 00:09:00.229 "base_bdevs_list": [ 00:09:00.229 { 00:09:00.229 "name": null, 00:09:00.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.229 "is_configured": false, 00:09:00.229 "data_offset": 0, 00:09:00.229 "data_size": 65536 00:09:00.229 }, 00:09:00.229 { 00:09:00.229 "name": "BaseBdev2", 00:09:00.229 "uuid": "23bb0e80-f453-4e82-af88-3912924d7132", 00:09:00.229 "is_configured": true, 00:09:00.229 "data_offset": 0, 00:09:00.229 "data_size": 65536 00:09:00.229 }, 00:09:00.229 { 00:09:00.229 "name": "BaseBdev3", 00:09:00.229 "uuid": "39a8b139-05b9-41e4-897c-977ce7480c92", 00:09:00.229 "is_configured": true, 00:09:00.229 "data_offset": 0, 00:09:00.229 "data_size": 65536 00:09:00.229 } 00:09:00.229 ] 
00:09:00.229 }' 00:09:00.229 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.229 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.798 [2024-09-30 23:26:40.485521] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.798 23:26:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.798 [2024-09-30 23:26:40.556789] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.798 [2024-09-30 23:26:40.556929] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.798 [2024-09-30 23:26:40.568421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.798 [2024-09-30 23:26:40.568540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.798 [2024-09-30 23:26:40.568599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.798 23:26:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.798 BaseBdev2 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:00.798 
23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.798 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.057 [ 00:09:01.057 { 00:09:01.057 "name": "BaseBdev2", 00:09:01.057 "aliases": [ 00:09:01.057 "e9a8423c-029e-4504-a398-73092c78ef18" 00:09:01.057 ], 00:09:01.057 "product_name": "Malloc disk", 00:09:01.057 "block_size": 512, 00:09:01.057 "num_blocks": 65536, 00:09:01.057 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:01.057 "assigned_rate_limits": { 00:09:01.057 "rw_ios_per_sec": 0, 00:09:01.057 "rw_mbytes_per_sec": 0, 00:09:01.057 "r_mbytes_per_sec": 0, 00:09:01.057 "w_mbytes_per_sec": 0 00:09:01.057 }, 00:09:01.057 "claimed": false, 00:09:01.057 "zoned": false, 00:09:01.057 "supported_io_types": { 00:09:01.057 "read": true, 00:09:01.057 "write": true, 00:09:01.057 "unmap": true, 00:09:01.057 "flush": true, 00:09:01.057 "reset": true, 00:09:01.057 "nvme_admin": false, 00:09:01.057 "nvme_io": false, 00:09:01.057 "nvme_io_md": false, 00:09:01.057 "write_zeroes": true, 
00:09:01.057 "zcopy": true, 00:09:01.057 "get_zone_info": false, 00:09:01.057 "zone_management": false, 00:09:01.057 "zone_append": false, 00:09:01.057 "compare": false, 00:09:01.057 "compare_and_write": false, 00:09:01.057 "abort": true, 00:09:01.057 "seek_hole": false, 00:09:01.057 "seek_data": false, 00:09:01.057 "copy": true, 00:09:01.057 "nvme_iov_md": false 00:09:01.057 }, 00:09:01.057 "memory_domains": [ 00:09:01.057 { 00:09:01.057 "dma_device_id": "system", 00:09:01.057 "dma_device_type": 1 00:09:01.057 }, 00:09:01.057 { 00:09:01.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.057 "dma_device_type": 2 00:09:01.057 } 00:09:01.057 ], 00:09:01.057 "driver_specific": {} 00:09:01.057 } 00:09:01.057 ] 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.057 BaseBdev3 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.057 23:26:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.057 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.058 [ 00:09:01.058 { 00:09:01.058 "name": "BaseBdev3", 00:09:01.058 "aliases": [ 00:09:01.058 "e02ed1f5-f590-4760-bbc6-71d27c7318f4" 00:09:01.058 ], 00:09:01.058 "product_name": "Malloc disk", 00:09:01.058 "block_size": 512, 00:09:01.058 "num_blocks": 65536, 00:09:01.058 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:01.058 "assigned_rate_limits": { 00:09:01.058 "rw_ios_per_sec": 0, 00:09:01.058 "rw_mbytes_per_sec": 0, 00:09:01.058 "r_mbytes_per_sec": 0, 00:09:01.058 "w_mbytes_per_sec": 0 00:09:01.058 }, 00:09:01.058 "claimed": false, 00:09:01.058 "zoned": false, 00:09:01.058 "supported_io_types": { 00:09:01.058 "read": true, 00:09:01.058 "write": true, 00:09:01.058 "unmap": true, 00:09:01.058 "flush": true, 00:09:01.058 "reset": true, 00:09:01.058 "nvme_admin": false, 00:09:01.058 "nvme_io": false, 00:09:01.058 "nvme_io_md": false, 00:09:01.058 "write_zeroes": true, 
00:09:01.058 "zcopy": true, 00:09:01.058 "get_zone_info": false, 00:09:01.058 "zone_management": false, 00:09:01.058 "zone_append": false, 00:09:01.058 "compare": false, 00:09:01.058 "compare_and_write": false, 00:09:01.058 "abort": true, 00:09:01.058 "seek_hole": false, 00:09:01.058 "seek_data": false, 00:09:01.058 "copy": true, 00:09:01.058 "nvme_iov_md": false 00:09:01.058 }, 00:09:01.058 "memory_domains": [ 00:09:01.058 { 00:09:01.058 "dma_device_id": "system", 00:09:01.058 "dma_device_type": 1 00:09:01.058 }, 00:09:01.058 { 00:09:01.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.058 "dma_device_type": 2 00:09:01.058 } 00:09:01.058 ], 00:09:01.058 "driver_specific": {} 00:09:01.058 } 00:09:01.058 ] 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.058 [2024-09-30 23:26:40.732000] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.058 [2024-09-30 23:26:40.732088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.058 [2024-09-30 23:26:40.732131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.058 [2024-09-30 23:26:40.734037] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:01.058 "name": "Existed_Raid", 00:09:01.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.058 "strip_size_kb": 0, 00:09:01.058 "state": "configuring", 00:09:01.058 "raid_level": "raid1", 00:09:01.058 "superblock": false, 00:09:01.058 "num_base_bdevs": 3, 00:09:01.058 "num_base_bdevs_discovered": 2, 00:09:01.058 "num_base_bdevs_operational": 3, 00:09:01.058 "base_bdevs_list": [ 00:09:01.058 { 00:09:01.058 "name": "BaseBdev1", 00:09:01.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.058 "is_configured": false, 00:09:01.058 "data_offset": 0, 00:09:01.058 "data_size": 0 00:09:01.058 }, 00:09:01.058 { 00:09:01.058 "name": "BaseBdev2", 00:09:01.058 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:01.058 "is_configured": true, 00:09:01.058 "data_offset": 0, 00:09:01.058 "data_size": 65536 00:09:01.058 }, 00:09:01.058 { 00:09:01.058 "name": "BaseBdev3", 00:09:01.058 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:01.058 "is_configured": true, 00:09:01.058 "data_offset": 0, 00:09:01.058 "data_size": 65536 00:09:01.058 } 00:09:01.058 ] 00:09:01.058 }' 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.058 23:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.317 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:01.317 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.317 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.317 [2024-09-30 23:26:41.167269] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.576 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.577 "name": "Existed_Raid", 00:09:01.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.577 "strip_size_kb": 0, 00:09:01.577 "state": "configuring", 00:09:01.577 "raid_level": "raid1", 00:09:01.577 "superblock": false, 00:09:01.577 "num_base_bdevs": 3, 
00:09:01.577 "num_base_bdevs_discovered": 1, 00:09:01.577 "num_base_bdevs_operational": 3, 00:09:01.577 "base_bdevs_list": [ 00:09:01.577 { 00:09:01.577 "name": "BaseBdev1", 00:09:01.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.577 "is_configured": false, 00:09:01.577 "data_offset": 0, 00:09:01.577 "data_size": 0 00:09:01.577 }, 00:09:01.577 { 00:09:01.577 "name": null, 00:09:01.577 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:01.577 "is_configured": false, 00:09:01.577 "data_offset": 0, 00:09:01.577 "data_size": 65536 00:09:01.577 }, 00:09:01.577 { 00:09:01.577 "name": "BaseBdev3", 00:09:01.577 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:01.577 "is_configured": true, 00:09:01.577 "data_offset": 0, 00:09:01.577 "data_size": 65536 00:09:01.577 } 00:09:01.577 ] 00:09:01.577 }' 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.577 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.836 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.836 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.836 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.837 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.837 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.095 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:02.095 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.095 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.095 23:26:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.095 [2024-09-30 23:26:41.737930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.095 BaseBdev1 00:09:02.095 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.096 [ 00:09:02.096 { 00:09:02.096 "name": "BaseBdev1", 00:09:02.096 "aliases": [ 00:09:02.096 "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27" 00:09:02.096 ], 00:09:02.096 "product_name": "Malloc disk", 
00:09:02.096 "block_size": 512, 00:09:02.096 "num_blocks": 65536, 00:09:02.096 "uuid": "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:02.096 "assigned_rate_limits": { 00:09:02.096 "rw_ios_per_sec": 0, 00:09:02.096 "rw_mbytes_per_sec": 0, 00:09:02.096 "r_mbytes_per_sec": 0, 00:09:02.096 "w_mbytes_per_sec": 0 00:09:02.096 }, 00:09:02.096 "claimed": true, 00:09:02.096 "claim_type": "exclusive_write", 00:09:02.096 "zoned": false, 00:09:02.096 "supported_io_types": { 00:09:02.096 "read": true, 00:09:02.096 "write": true, 00:09:02.096 "unmap": true, 00:09:02.096 "flush": true, 00:09:02.096 "reset": true, 00:09:02.096 "nvme_admin": false, 00:09:02.096 "nvme_io": false, 00:09:02.096 "nvme_io_md": false, 00:09:02.096 "write_zeroes": true, 00:09:02.096 "zcopy": true, 00:09:02.096 "get_zone_info": false, 00:09:02.096 "zone_management": false, 00:09:02.096 "zone_append": false, 00:09:02.096 "compare": false, 00:09:02.096 "compare_and_write": false, 00:09:02.096 "abort": true, 00:09:02.096 "seek_hole": false, 00:09:02.096 "seek_data": false, 00:09:02.096 "copy": true, 00:09:02.096 "nvme_iov_md": false 00:09:02.096 }, 00:09:02.096 "memory_domains": [ 00:09:02.096 { 00:09:02.096 "dma_device_id": "system", 00:09:02.096 "dma_device_type": 1 00:09:02.096 }, 00:09:02.096 { 00:09:02.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.096 "dma_device_type": 2 00:09:02.096 } 00:09:02.096 ], 00:09:02.096 "driver_specific": {} 00:09:02.096 } 00:09:02.096 ] 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.096 "name": "Existed_Raid", 00:09:02.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.096 "strip_size_kb": 0, 00:09:02.096 "state": "configuring", 00:09:02.096 "raid_level": "raid1", 00:09:02.096 "superblock": false, 00:09:02.096 "num_base_bdevs": 3, 00:09:02.096 "num_base_bdevs_discovered": 2, 00:09:02.096 "num_base_bdevs_operational": 3, 00:09:02.096 "base_bdevs_list": [ 00:09:02.096 { 00:09:02.096 "name": "BaseBdev1", 00:09:02.096 "uuid": 
"08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:02.096 "is_configured": true, 00:09:02.096 "data_offset": 0, 00:09:02.096 "data_size": 65536 00:09:02.096 }, 00:09:02.096 { 00:09:02.096 "name": null, 00:09:02.096 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:02.096 "is_configured": false, 00:09:02.096 "data_offset": 0, 00:09:02.096 "data_size": 65536 00:09:02.096 }, 00:09:02.096 { 00:09:02.096 "name": "BaseBdev3", 00:09:02.096 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:02.096 "is_configured": true, 00:09:02.096 "data_offset": 0, 00:09:02.096 "data_size": 65536 00:09:02.096 } 00:09:02.096 ] 00:09:02.096 }' 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.096 23:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.663 [2024-09-30 23:26:42.269064] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.663 23:26:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.663 "name": "Existed_Raid", 00:09:02.663 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:02.663 "strip_size_kb": 0, 00:09:02.663 "state": "configuring", 00:09:02.663 "raid_level": "raid1", 00:09:02.663 "superblock": false, 00:09:02.663 "num_base_bdevs": 3, 00:09:02.663 "num_base_bdevs_discovered": 1, 00:09:02.663 "num_base_bdevs_operational": 3, 00:09:02.663 "base_bdevs_list": [ 00:09:02.663 { 00:09:02.663 "name": "BaseBdev1", 00:09:02.663 "uuid": "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:02.663 "is_configured": true, 00:09:02.663 "data_offset": 0, 00:09:02.663 "data_size": 65536 00:09:02.663 }, 00:09:02.663 { 00:09:02.663 "name": null, 00:09:02.663 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:02.663 "is_configured": false, 00:09:02.663 "data_offset": 0, 00:09:02.663 "data_size": 65536 00:09:02.663 }, 00:09:02.663 { 00:09:02.663 "name": null, 00:09:02.663 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:02.663 "is_configured": false, 00:09:02.663 "data_offset": 0, 00:09:02.663 "data_size": 65536 00:09:02.663 } 00:09:02.663 ] 00:09:02.663 }' 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.663 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.922 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.922 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.922 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.922 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:02.922 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.182 [2024-09-30 23:26:42.784223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.182 "name": "Existed_Raid", 00:09:03.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.182 "strip_size_kb": 0, 00:09:03.182 "state": "configuring", 00:09:03.182 "raid_level": "raid1", 00:09:03.182 "superblock": false, 00:09:03.182 "num_base_bdevs": 3, 00:09:03.182 "num_base_bdevs_discovered": 2, 00:09:03.182 "num_base_bdevs_operational": 3, 00:09:03.182 "base_bdevs_list": [ 00:09:03.182 { 00:09:03.182 "name": "BaseBdev1", 00:09:03.182 "uuid": "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:03.182 "is_configured": true, 00:09:03.182 "data_offset": 0, 00:09:03.182 "data_size": 65536 00:09:03.182 }, 00:09:03.182 { 00:09:03.182 "name": null, 00:09:03.182 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:03.182 "is_configured": false, 00:09:03.182 "data_offset": 0, 00:09:03.182 "data_size": 65536 00:09:03.182 }, 00:09:03.182 { 00:09:03.182 "name": "BaseBdev3", 00:09:03.182 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:03.182 "is_configured": true, 00:09:03.182 "data_offset": 0, 00:09:03.182 "data_size": 65536 00:09:03.182 } 00:09:03.182 ] 00:09:03.182 }' 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.182 23:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.441 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.441 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.441 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.441 23:26:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.441 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.441 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:03.441 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.441 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.441 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.699 [2024-09-30 23:26:43.299362] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.699 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.699 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.700 "name": "Existed_Raid", 00:09:03.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.700 "strip_size_kb": 0, 00:09:03.700 "state": "configuring", 00:09:03.700 "raid_level": "raid1", 00:09:03.700 "superblock": false, 00:09:03.700 "num_base_bdevs": 3, 00:09:03.700 "num_base_bdevs_discovered": 1, 00:09:03.700 "num_base_bdevs_operational": 3, 00:09:03.700 "base_bdevs_list": [ 00:09:03.700 { 00:09:03.700 "name": null, 00:09:03.700 "uuid": "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:03.700 "is_configured": false, 00:09:03.700 "data_offset": 0, 00:09:03.700 "data_size": 65536 00:09:03.700 }, 00:09:03.700 { 00:09:03.700 "name": null, 00:09:03.700 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:03.700 "is_configured": false, 00:09:03.700 "data_offset": 0, 00:09:03.700 "data_size": 65536 00:09:03.700 }, 00:09:03.700 { 00:09:03.700 "name": "BaseBdev3", 00:09:03.700 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:03.700 "is_configured": true, 00:09:03.700 "data_offset": 0, 00:09:03.700 "data_size": 65536 00:09:03.700 } 00:09:03.700 ] 00:09:03.700 }' 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.700 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:03.959 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.959 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.959 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.959 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.217 [2024-09-30 23:26:43.837191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.217 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.217 "name": "Existed_Raid", 00:09:04.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.217 "strip_size_kb": 0, 00:09:04.217 "state": "configuring", 00:09:04.217 "raid_level": "raid1", 00:09:04.217 "superblock": false, 00:09:04.217 "num_base_bdevs": 3, 00:09:04.217 "num_base_bdevs_discovered": 2, 00:09:04.217 "num_base_bdevs_operational": 3, 00:09:04.217 "base_bdevs_list": [ 00:09:04.217 { 00:09:04.217 "name": null, 00:09:04.217 "uuid": "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:04.217 "is_configured": false, 00:09:04.217 "data_offset": 0, 00:09:04.217 "data_size": 65536 00:09:04.217 }, 00:09:04.217 { 00:09:04.217 "name": "BaseBdev2", 00:09:04.217 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:04.217 "is_configured": true, 00:09:04.217 "data_offset": 0, 00:09:04.217 "data_size": 65536 00:09:04.217 }, 00:09:04.217 { 00:09:04.217 "name": "BaseBdev3", 
00:09:04.217 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:04.218 "is_configured": true, 00:09:04.218 "data_offset": 0, 00:09:04.218 "data_size": 65536 00:09:04.218 } 00:09:04.218 ] 00:09:04.218 }' 00:09:04.218 23:26:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.218 23:26:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.476 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.476 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:04.476 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.476 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 08d03e0a-1dfc-4f1a-a021-1f4114c0ec27 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.736 [2024-09-30 23:26:44.407206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:04.736 [2024-09-30 23:26:44.407328] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:04.736 [2024-09-30 23:26:44.407354] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:04.736 [2024-09-30 23:26:44.407648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:04.736 [2024-09-30 23:26:44.407839] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:04.736 [2024-09-30 23:26:44.407898] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:04.736 [2024-09-30 23:26:44.408112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.736 NewBaseBdev 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.736 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.737 
23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.737 [ 00:09:04.737 { 00:09:04.737 "name": "NewBaseBdev", 00:09:04.737 "aliases": [ 00:09:04.737 "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27" 00:09:04.737 ], 00:09:04.737 "product_name": "Malloc disk", 00:09:04.737 "block_size": 512, 00:09:04.737 "num_blocks": 65536, 00:09:04.737 "uuid": "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:04.737 "assigned_rate_limits": { 00:09:04.737 "rw_ios_per_sec": 0, 00:09:04.737 "rw_mbytes_per_sec": 0, 00:09:04.737 "r_mbytes_per_sec": 0, 00:09:04.737 "w_mbytes_per_sec": 0 00:09:04.737 }, 00:09:04.737 "claimed": true, 00:09:04.737 "claim_type": "exclusive_write", 00:09:04.737 "zoned": false, 00:09:04.737 "supported_io_types": { 00:09:04.737 "read": true, 00:09:04.737 "write": true, 00:09:04.737 "unmap": true, 00:09:04.737 "flush": true, 00:09:04.737 "reset": true, 00:09:04.737 "nvme_admin": false, 00:09:04.737 "nvme_io": false, 00:09:04.737 "nvme_io_md": false, 00:09:04.737 "write_zeroes": true, 00:09:04.737 "zcopy": true, 00:09:04.737 "get_zone_info": false, 00:09:04.737 "zone_management": false, 00:09:04.737 "zone_append": false, 00:09:04.737 "compare": false, 00:09:04.737 "compare_and_write": false, 00:09:04.737 "abort": true, 00:09:04.737 "seek_hole": false, 00:09:04.737 "seek_data": false, 00:09:04.737 "copy": true, 00:09:04.737 "nvme_iov_md": false 00:09:04.737 }, 00:09:04.737 "memory_domains": [ 00:09:04.737 { 00:09:04.737 "dma_device_id": "system", 00:09:04.737 "dma_device_type": 1 
00:09:04.737 }, 00:09:04.737 { 00:09:04.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.737 "dma_device_type": 2 00:09:04.737 } 00:09:04.737 ], 00:09:04.737 "driver_specific": {} 00:09:04.737 } 00:09:04.737 ] 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.737 "name": "Existed_Raid", 00:09:04.737 "uuid": "d58929ef-ef61-41d3-9cbb-4ed1e2852b39", 00:09:04.737 "strip_size_kb": 0, 00:09:04.737 "state": "online", 00:09:04.737 "raid_level": "raid1", 00:09:04.737 "superblock": false, 00:09:04.737 "num_base_bdevs": 3, 00:09:04.737 "num_base_bdevs_discovered": 3, 00:09:04.737 "num_base_bdevs_operational": 3, 00:09:04.737 "base_bdevs_list": [ 00:09:04.737 { 00:09:04.737 "name": "NewBaseBdev", 00:09:04.737 "uuid": "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:04.737 "is_configured": true, 00:09:04.737 "data_offset": 0, 00:09:04.737 "data_size": 65536 00:09:04.737 }, 00:09:04.737 { 00:09:04.737 "name": "BaseBdev2", 00:09:04.737 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:04.737 "is_configured": true, 00:09:04.737 "data_offset": 0, 00:09:04.737 "data_size": 65536 00:09:04.737 }, 00:09:04.737 { 00:09:04.737 "name": "BaseBdev3", 00:09:04.737 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:04.737 "is_configured": true, 00:09:04.737 "data_offset": 0, 00:09:04.737 "data_size": 65536 00:09:04.737 } 00:09:04.737 ] 00:09:04.737 }' 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.737 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.306 [2024-09-30 23:26:44.918654] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.306 "name": "Existed_Raid", 00:09:05.306 "aliases": [ 00:09:05.306 "d58929ef-ef61-41d3-9cbb-4ed1e2852b39" 00:09:05.306 ], 00:09:05.306 "product_name": "Raid Volume", 00:09:05.306 "block_size": 512, 00:09:05.306 "num_blocks": 65536, 00:09:05.306 "uuid": "d58929ef-ef61-41d3-9cbb-4ed1e2852b39", 00:09:05.306 "assigned_rate_limits": { 00:09:05.306 "rw_ios_per_sec": 0, 00:09:05.306 "rw_mbytes_per_sec": 0, 00:09:05.306 "r_mbytes_per_sec": 0, 00:09:05.306 "w_mbytes_per_sec": 0 00:09:05.306 }, 00:09:05.306 "claimed": false, 00:09:05.306 "zoned": false, 00:09:05.306 "supported_io_types": { 00:09:05.306 "read": true, 00:09:05.306 "write": true, 00:09:05.306 "unmap": false, 00:09:05.306 "flush": false, 00:09:05.306 "reset": true, 00:09:05.306 "nvme_admin": false, 00:09:05.306 "nvme_io": false, 00:09:05.306 "nvme_io_md": false, 00:09:05.306 "write_zeroes": true, 00:09:05.306 "zcopy": false, 00:09:05.306 "get_zone_info": false, 00:09:05.306 "zone_management": false, 00:09:05.306 
"zone_append": false, 00:09:05.306 "compare": false, 00:09:05.306 "compare_and_write": false, 00:09:05.306 "abort": false, 00:09:05.306 "seek_hole": false, 00:09:05.306 "seek_data": false, 00:09:05.306 "copy": false, 00:09:05.306 "nvme_iov_md": false 00:09:05.306 }, 00:09:05.306 "memory_domains": [ 00:09:05.306 { 00:09:05.306 "dma_device_id": "system", 00:09:05.306 "dma_device_type": 1 00:09:05.306 }, 00:09:05.306 { 00:09:05.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.306 "dma_device_type": 2 00:09:05.306 }, 00:09:05.306 { 00:09:05.306 "dma_device_id": "system", 00:09:05.306 "dma_device_type": 1 00:09:05.306 }, 00:09:05.306 { 00:09:05.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.306 "dma_device_type": 2 00:09:05.306 }, 00:09:05.306 { 00:09:05.306 "dma_device_id": "system", 00:09:05.306 "dma_device_type": 1 00:09:05.306 }, 00:09:05.306 { 00:09:05.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.306 "dma_device_type": 2 00:09:05.306 } 00:09:05.306 ], 00:09:05.306 "driver_specific": { 00:09:05.306 "raid": { 00:09:05.306 "uuid": "d58929ef-ef61-41d3-9cbb-4ed1e2852b39", 00:09:05.306 "strip_size_kb": 0, 00:09:05.306 "state": "online", 00:09:05.306 "raid_level": "raid1", 00:09:05.306 "superblock": false, 00:09:05.306 "num_base_bdevs": 3, 00:09:05.306 "num_base_bdevs_discovered": 3, 00:09:05.306 "num_base_bdevs_operational": 3, 00:09:05.306 "base_bdevs_list": [ 00:09:05.306 { 00:09:05.306 "name": "NewBaseBdev", 00:09:05.306 "uuid": "08d03e0a-1dfc-4f1a-a021-1f4114c0ec27", 00:09:05.306 "is_configured": true, 00:09:05.306 "data_offset": 0, 00:09:05.306 "data_size": 65536 00:09:05.306 }, 00:09:05.306 { 00:09:05.306 "name": "BaseBdev2", 00:09:05.306 "uuid": "e9a8423c-029e-4504-a398-73092c78ef18", 00:09:05.306 "is_configured": true, 00:09:05.306 "data_offset": 0, 00:09:05.306 "data_size": 65536 00:09:05.306 }, 00:09:05.306 { 00:09:05.306 "name": "BaseBdev3", 00:09:05.306 "uuid": "e02ed1f5-f590-4760-bbc6-71d27c7318f4", 00:09:05.306 "is_configured": true, 
00:09:05.306 "data_offset": 0, 00:09:05.306 "data_size": 65536 00:09:05.306 } 00:09:05.306 ] 00:09:05.306 } 00:09:05.306 } 00:09:05.306 }' 00:09:05.306 23:26:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:05.306 BaseBdev2 00:09:05.306 BaseBdev3' 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.306 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.306 23:26:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.307 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.566 [2024-09-30 23:26:45.181942] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:05.566 [2024-09-30 23:26:45.182009] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.566 [2024-09-30 23:26:45.182098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.566 [2024-09-30 23:26:45.182374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.566 [2024-09-30 23:26:45.182424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78497 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78497 ']' 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78497 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78497 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78497' 00:09:05.566 killing process with pid 78497 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78497 00:09:05.566 [2024-09-30 23:26:45.232798] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:05.566 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78497 00:09:05.566 [2024-09-30 23:26:45.264309] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:05.827 00:09:05.827 real 0m9.172s 00:09:05.827 user 0m15.627s 00:09:05.827 sys 0m1.951s 00:09:05.827 ************************************ 00:09:05.827 END TEST raid_state_function_test 00:09:05.827 ************************************ 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.827 23:26:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:05.827 23:26:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:05.827 23:26:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.827 23:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.827 ************************************ 00:09:05.827 START TEST raid_state_function_test_sb 00:09:05.827 ************************************ 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79107 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79107' 00:09:05.827 Process raid pid: 79107 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79107 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79107 ']' 00:09:05.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.827 23:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.087 [2024-09-30 23:26:45.686163] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:06.087 [2024-09-30 23:26:45.686307] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.087 [2024-09-30 23:26:45.846471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.087 [2024-09-30 23:26:45.891229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.087 [2024-09-30 23:26:45.933402] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.087 [2024-09-30 23:26:45.933440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.655 [2024-09-30 23:26:46.495039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.655 [2024-09-30 23:26:46.495094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.655 [2024-09-30 23:26:46.495106] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:06.655 [2024-09-30 23:26:46.495116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:06.655 [2024-09-30 23:26:46.495123] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:06.655 [2024-09-30 23:26:46.495146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.655 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.915 23:26:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.915 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.915 "name": "Existed_Raid", 00:09:06.915 "uuid": "ba988bec-1729-42f1-afd8-9dab86b2bd06", 00:09:06.915 "strip_size_kb": 0, 00:09:06.915 "state": "configuring", 00:09:06.915 "raid_level": "raid1", 00:09:06.915 "superblock": true, 00:09:06.915 "num_base_bdevs": 3, 00:09:06.915 "num_base_bdevs_discovered": 0, 00:09:06.915 "num_base_bdevs_operational": 3, 00:09:06.915 "base_bdevs_list": [ 00:09:06.915 { 00:09:06.915 "name": "BaseBdev1", 00:09:06.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.915 "is_configured": false, 00:09:06.915 "data_offset": 0, 00:09:06.915 "data_size": 0 00:09:06.915 }, 00:09:06.915 { 00:09:06.915 "name": "BaseBdev2", 00:09:06.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.915 "is_configured": false, 00:09:06.915 "data_offset": 0, 00:09:06.915 "data_size": 0 00:09:06.915 }, 00:09:06.915 { 00:09:06.915 "name": "BaseBdev3", 00:09:06.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.915 "is_configured": false, 00:09:06.915 "data_offset": 0, 00:09:06.915 "data_size": 0 00:09:06.915 } 00:09:06.915 ] 00:09:06.915 }' 00:09:06.915 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.915 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.175 [2024-09-30 23:26:46.946133] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.175 [2024-09-30 23:26:46.946243] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.175 [2024-09-30 23:26:46.958150] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.175 [2024-09-30 23:26:46.958241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.175 [2024-09-30 23:26:46.958268] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.175 [2024-09-30 23:26:46.958290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.175 [2024-09-30 23:26:46.958308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.175 [2024-09-30 23:26:46.958328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.175 [2024-09-30 23:26:46.979062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.175 BaseBdev1 
00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.175 23:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.175 [ 00:09:07.175 { 00:09:07.175 "name": "BaseBdev1", 00:09:07.175 "aliases": [ 00:09:07.175 "6b5e803c-dbd6-405f-adfb-3bcc3348d9d7" 00:09:07.175 ], 00:09:07.175 "product_name": "Malloc disk", 00:09:07.175 "block_size": 512, 00:09:07.175 "num_blocks": 65536, 00:09:07.175 "uuid": "6b5e803c-dbd6-405f-adfb-3bcc3348d9d7", 00:09:07.175 "assigned_rate_limits": { 00:09:07.175 
"rw_ios_per_sec": 0, 00:09:07.175 "rw_mbytes_per_sec": 0, 00:09:07.175 "r_mbytes_per_sec": 0, 00:09:07.175 "w_mbytes_per_sec": 0 00:09:07.175 }, 00:09:07.175 "claimed": true, 00:09:07.175 "claim_type": "exclusive_write", 00:09:07.175 "zoned": false, 00:09:07.175 "supported_io_types": { 00:09:07.175 "read": true, 00:09:07.175 "write": true, 00:09:07.175 "unmap": true, 00:09:07.175 "flush": true, 00:09:07.175 "reset": true, 00:09:07.175 "nvme_admin": false, 00:09:07.175 "nvme_io": false, 00:09:07.175 "nvme_io_md": false, 00:09:07.175 "write_zeroes": true, 00:09:07.175 "zcopy": true, 00:09:07.175 "get_zone_info": false, 00:09:07.175 "zone_management": false, 00:09:07.175 "zone_append": false, 00:09:07.175 "compare": false, 00:09:07.175 "compare_and_write": false, 00:09:07.175 "abort": true, 00:09:07.175 "seek_hole": false, 00:09:07.175 "seek_data": false, 00:09:07.175 "copy": true, 00:09:07.175 "nvme_iov_md": false 00:09:07.175 }, 00:09:07.175 "memory_domains": [ 00:09:07.175 { 00:09:07.175 "dma_device_id": "system", 00:09:07.175 "dma_device_type": 1 00:09:07.175 }, 00:09:07.175 { 00:09:07.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.175 "dma_device_type": 2 00:09:07.175 } 00:09:07.175 ], 00:09:07.175 "driver_specific": {} 00:09:07.175 } 00:09:07.175 ] 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.175 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.434 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.434 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.434 "name": "Existed_Raid", 00:09:07.434 "uuid": "75ef343d-6b95-4d93-ba72-c2d239451fb8", 00:09:07.434 "strip_size_kb": 0, 00:09:07.434 "state": "configuring", 00:09:07.434 "raid_level": "raid1", 00:09:07.434 "superblock": true, 00:09:07.434 "num_base_bdevs": 3, 00:09:07.434 "num_base_bdevs_discovered": 1, 00:09:07.434 "num_base_bdevs_operational": 3, 00:09:07.434 "base_bdevs_list": [ 00:09:07.434 { 00:09:07.434 "name": "BaseBdev1", 00:09:07.434 "uuid": "6b5e803c-dbd6-405f-adfb-3bcc3348d9d7", 00:09:07.434 "is_configured": true, 00:09:07.434 "data_offset": 2048, 00:09:07.434 "data_size": 63488 
00:09:07.434 }, 00:09:07.434 { 00:09:07.434 "name": "BaseBdev2", 00:09:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.434 "is_configured": false, 00:09:07.434 "data_offset": 0, 00:09:07.434 "data_size": 0 00:09:07.434 }, 00:09:07.434 { 00:09:07.434 "name": "BaseBdev3", 00:09:07.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.434 "is_configured": false, 00:09:07.434 "data_offset": 0, 00:09:07.434 "data_size": 0 00:09:07.434 } 00:09:07.434 ] 00:09:07.434 }' 00:09:07.434 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.434 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.694 [2024-09-30 23:26:47.490307] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:07.694 [2024-09-30 23:26:47.490357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.694 [2024-09-30 23:26:47.502325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.694 [2024-09-30 23:26:47.504159] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.694 [2024-09-30 23:26:47.504200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.694 [2024-09-30 23:26:47.504209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:07.694 [2024-09-30 23:26:47.504219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.694 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.954 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.954 "name": "Existed_Raid", 00:09:07.954 "uuid": "d26357f8-fa91-4eb8-be3e-a07c570b3eac", 00:09:07.954 "strip_size_kb": 0, 00:09:07.954 "state": "configuring", 00:09:07.954 "raid_level": "raid1", 00:09:07.954 "superblock": true, 00:09:07.954 "num_base_bdevs": 3, 00:09:07.954 "num_base_bdevs_discovered": 1, 00:09:07.954 "num_base_bdevs_operational": 3, 00:09:07.954 "base_bdevs_list": [ 00:09:07.954 { 00:09:07.954 "name": "BaseBdev1", 00:09:07.954 "uuid": "6b5e803c-dbd6-405f-adfb-3bcc3348d9d7", 00:09:07.954 "is_configured": true, 00:09:07.954 "data_offset": 2048, 00:09:07.954 "data_size": 63488 00:09:07.954 }, 00:09:07.954 { 00:09:07.954 "name": "BaseBdev2", 00:09:07.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.954 "is_configured": false, 00:09:07.954 "data_offset": 0, 00:09:07.954 "data_size": 0 00:09:07.954 }, 00:09:07.954 { 00:09:07.954 "name": "BaseBdev3", 00:09:07.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.954 "is_configured": false, 00:09:07.954 "data_offset": 0, 00:09:07.954 "data_size": 0 00:09:07.954 } 00:09:07.954 ] 00:09:07.954 }' 00:09:07.954 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.954 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.214 [2024-09-30 23:26:47.954300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.214 BaseBdev2 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:08.214 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.214 [ 00:09:08.214 { 00:09:08.214 "name": "BaseBdev2", 00:09:08.214 "aliases": [ 00:09:08.214 "5b21d39d-3402-4bf8-8b3b-051c9ba75651" 00:09:08.214 ], 00:09:08.214 "product_name": "Malloc disk", 00:09:08.214 "block_size": 512, 00:09:08.214 "num_blocks": 65536, 00:09:08.214 "uuid": "5b21d39d-3402-4bf8-8b3b-051c9ba75651", 00:09:08.214 "assigned_rate_limits": { 00:09:08.214 "rw_ios_per_sec": 0, 00:09:08.214 "rw_mbytes_per_sec": 0, 00:09:08.214 "r_mbytes_per_sec": 0, 00:09:08.214 "w_mbytes_per_sec": 0 00:09:08.214 }, 00:09:08.214 "claimed": true, 00:09:08.214 "claim_type": "exclusive_write", 00:09:08.214 "zoned": false, 00:09:08.214 "supported_io_types": { 00:09:08.214 "read": true, 00:09:08.214 "write": true, 00:09:08.214 "unmap": true, 00:09:08.214 "flush": true, 00:09:08.214 "reset": true, 00:09:08.214 "nvme_admin": false, 00:09:08.214 "nvme_io": false, 00:09:08.214 "nvme_io_md": false, 00:09:08.214 "write_zeroes": true, 00:09:08.214 "zcopy": true, 00:09:08.214 "get_zone_info": false, 00:09:08.214 "zone_management": false, 00:09:08.214 "zone_append": false, 00:09:08.214 "compare": false, 00:09:08.214 "compare_and_write": false, 00:09:08.214 "abort": true, 00:09:08.214 "seek_hole": false, 00:09:08.214 "seek_data": false, 00:09:08.214 "copy": true, 00:09:08.214 "nvme_iov_md": false 00:09:08.214 }, 00:09:08.215 "memory_domains": [ 00:09:08.215 { 00:09:08.215 "dma_device_id": "system", 00:09:08.215 "dma_device_type": 1 00:09:08.215 }, 00:09:08.215 { 00:09:08.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.215 "dma_device_type": 2 00:09:08.215 } 00:09:08.215 ], 00:09:08.215 "driver_specific": {} 00:09:08.215 } 00:09:08.215 ] 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.215 23:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.215 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.215 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.215 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.215 
23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.215 "name": "Existed_Raid", 00:09:08.215 "uuid": "d26357f8-fa91-4eb8-be3e-a07c570b3eac", 00:09:08.215 "strip_size_kb": 0, 00:09:08.215 "state": "configuring", 00:09:08.215 "raid_level": "raid1", 00:09:08.215 "superblock": true, 00:09:08.215 "num_base_bdevs": 3, 00:09:08.215 "num_base_bdevs_discovered": 2, 00:09:08.215 "num_base_bdevs_operational": 3, 00:09:08.215 "base_bdevs_list": [ 00:09:08.215 { 00:09:08.215 "name": "BaseBdev1", 00:09:08.215 "uuid": "6b5e803c-dbd6-405f-adfb-3bcc3348d9d7", 00:09:08.215 "is_configured": true, 00:09:08.215 "data_offset": 2048, 00:09:08.215 "data_size": 63488 00:09:08.215 }, 00:09:08.215 { 00:09:08.215 "name": "BaseBdev2", 00:09:08.215 "uuid": "5b21d39d-3402-4bf8-8b3b-051c9ba75651", 00:09:08.215 "is_configured": true, 00:09:08.215 "data_offset": 2048, 00:09:08.215 "data_size": 63488 00:09:08.215 }, 00:09:08.215 { 00:09:08.215 "name": "BaseBdev3", 00:09:08.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.215 "is_configured": false, 00:09:08.215 "data_offset": 0, 00:09:08.215 "data_size": 0 00:09:08.215 } 00:09:08.215 ] 00:09:08.215 }' 00:09:08.215 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.215 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.784 BaseBdev3 00:09:08.784 [2024-09-30 23:26:48.472473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.784 [2024-09-30 23:26:48.472680] bdev_raid.c:1730:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000006980 00:09:08.784 [2024-09-30 23:26:48.472705] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:08.784 [2024-09-30 23:26:48.473003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:08.784 [2024-09-30 23:26:48.473135] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:08.784 [2024-09-30 23:26:48.473146] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:08.784 [2024-09-30 23:26:48.473264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.784 23:26:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.784 [ 00:09:08.784 { 00:09:08.784 "name": "BaseBdev3", 00:09:08.784 "aliases": [ 00:09:08.784 "13d32ff8-4c5f-42d9-9760-ee460f9f2a04" 00:09:08.784 ], 00:09:08.784 "product_name": "Malloc disk", 00:09:08.784 "block_size": 512, 00:09:08.784 "num_blocks": 65536, 00:09:08.784 "uuid": "13d32ff8-4c5f-42d9-9760-ee460f9f2a04", 00:09:08.784 "assigned_rate_limits": { 00:09:08.784 "rw_ios_per_sec": 0, 00:09:08.784 "rw_mbytes_per_sec": 0, 00:09:08.784 "r_mbytes_per_sec": 0, 00:09:08.784 "w_mbytes_per_sec": 0 00:09:08.784 }, 00:09:08.784 "claimed": true, 00:09:08.784 "claim_type": "exclusive_write", 00:09:08.784 "zoned": false, 00:09:08.784 "supported_io_types": { 00:09:08.784 "read": true, 00:09:08.784 "write": true, 00:09:08.784 "unmap": true, 00:09:08.784 "flush": true, 00:09:08.784 "reset": true, 00:09:08.784 "nvme_admin": false, 00:09:08.784 "nvme_io": false, 00:09:08.784 "nvme_io_md": false, 00:09:08.784 "write_zeroes": true, 00:09:08.784 "zcopy": true, 00:09:08.784 "get_zone_info": false, 00:09:08.784 "zone_management": false, 00:09:08.784 "zone_append": false, 00:09:08.784 "compare": false, 00:09:08.784 "compare_and_write": false, 00:09:08.784 "abort": true, 00:09:08.784 "seek_hole": false, 00:09:08.784 "seek_data": false, 00:09:08.784 "copy": true, 00:09:08.784 "nvme_iov_md": false 00:09:08.784 }, 00:09:08.784 "memory_domains": [ 00:09:08.784 { 00:09:08.784 "dma_device_id": "system", 00:09:08.784 "dma_device_type": 1 00:09:08.784 }, 00:09:08.784 { 00:09:08.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.784 "dma_device_type": 2 00:09:08.784 } 00:09:08.784 ], 00:09:08.784 "driver_specific": {} 00:09:08.784 } 00:09:08.784 ] 
00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.784 
23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.784 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.784 "name": "Existed_Raid", 00:09:08.784 "uuid": "d26357f8-fa91-4eb8-be3e-a07c570b3eac", 00:09:08.784 "strip_size_kb": 0, 00:09:08.784 "state": "online", 00:09:08.784 "raid_level": "raid1", 00:09:08.784 "superblock": true, 00:09:08.784 "num_base_bdevs": 3, 00:09:08.784 "num_base_bdevs_discovered": 3, 00:09:08.784 "num_base_bdevs_operational": 3, 00:09:08.784 "base_bdevs_list": [ 00:09:08.784 { 00:09:08.784 "name": "BaseBdev1", 00:09:08.785 "uuid": "6b5e803c-dbd6-405f-adfb-3bcc3348d9d7", 00:09:08.785 "is_configured": true, 00:09:08.785 "data_offset": 2048, 00:09:08.785 "data_size": 63488 00:09:08.785 }, 00:09:08.785 { 00:09:08.785 "name": "BaseBdev2", 00:09:08.785 "uuid": "5b21d39d-3402-4bf8-8b3b-051c9ba75651", 00:09:08.785 "is_configured": true, 00:09:08.785 "data_offset": 2048, 00:09:08.785 "data_size": 63488 00:09:08.785 }, 00:09:08.785 { 00:09:08.785 "name": "BaseBdev3", 00:09:08.785 "uuid": "13d32ff8-4c5f-42d9-9760-ee460f9f2a04", 00:09:08.785 "is_configured": true, 00:09:08.785 "data_offset": 2048, 00:09:08.785 "data_size": 63488 00:09:08.785 } 00:09:08.785 ] 00:09:08.785 }' 00:09:08.785 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.785 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.355 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.356 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.356 [2024-09-30 23:26:48.964029] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.356 23:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.356 "name": "Existed_Raid", 00:09:09.356 "aliases": [ 00:09:09.356 "d26357f8-fa91-4eb8-be3e-a07c570b3eac" 00:09:09.356 ], 00:09:09.356 "product_name": "Raid Volume", 00:09:09.356 "block_size": 512, 00:09:09.356 "num_blocks": 63488, 00:09:09.356 "uuid": "d26357f8-fa91-4eb8-be3e-a07c570b3eac", 00:09:09.356 "assigned_rate_limits": { 00:09:09.356 "rw_ios_per_sec": 0, 00:09:09.356 "rw_mbytes_per_sec": 0, 00:09:09.356 "r_mbytes_per_sec": 0, 00:09:09.356 "w_mbytes_per_sec": 0 00:09:09.356 }, 00:09:09.356 "claimed": false, 00:09:09.356 "zoned": false, 00:09:09.356 "supported_io_types": { 00:09:09.356 "read": true, 00:09:09.356 "write": true, 00:09:09.356 "unmap": false, 00:09:09.356 "flush": false, 00:09:09.356 "reset": true, 00:09:09.356 "nvme_admin": false, 00:09:09.356 "nvme_io": false, 00:09:09.356 "nvme_io_md": false, 00:09:09.356 "write_zeroes": true, 
00:09:09.356 "zcopy": false, 00:09:09.356 "get_zone_info": false, 00:09:09.356 "zone_management": false, 00:09:09.356 "zone_append": false, 00:09:09.356 "compare": false, 00:09:09.356 "compare_and_write": false, 00:09:09.356 "abort": false, 00:09:09.356 "seek_hole": false, 00:09:09.356 "seek_data": false, 00:09:09.356 "copy": false, 00:09:09.356 "nvme_iov_md": false 00:09:09.356 }, 00:09:09.356 "memory_domains": [ 00:09:09.356 { 00:09:09.356 "dma_device_id": "system", 00:09:09.356 "dma_device_type": 1 00:09:09.356 }, 00:09:09.356 { 00:09:09.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.356 "dma_device_type": 2 00:09:09.356 }, 00:09:09.356 { 00:09:09.356 "dma_device_id": "system", 00:09:09.356 "dma_device_type": 1 00:09:09.356 }, 00:09:09.356 { 00:09:09.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.356 "dma_device_type": 2 00:09:09.356 }, 00:09:09.356 { 00:09:09.356 "dma_device_id": "system", 00:09:09.356 "dma_device_type": 1 00:09:09.356 }, 00:09:09.356 { 00:09:09.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.356 "dma_device_type": 2 00:09:09.356 } 00:09:09.356 ], 00:09:09.356 "driver_specific": { 00:09:09.356 "raid": { 00:09:09.356 "uuid": "d26357f8-fa91-4eb8-be3e-a07c570b3eac", 00:09:09.356 "strip_size_kb": 0, 00:09:09.356 "state": "online", 00:09:09.356 "raid_level": "raid1", 00:09:09.356 "superblock": true, 00:09:09.356 "num_base_bdevs": 3, 00:09:09.356 "num_base_bdevs_discovered": 3, 00:09:09.356 "num_base_bdevs_operational": 3, 00:09:09.356 "base_bdevs_list": [ 00:09:09.356 { 00:09:09.356 "name": "BaseBdev1", 00:09:09.356 "uuid": "6b5e803c-dbd6-405f-adfb-3bcc3348d9d7", 00:09:09.356 "is_configured": true, 00:09:09.356 "data_offset": 2048, 00:09:09.356 "data_size": 63488 00:09:09.356 }, 00:09:09.356 { 00:09:09.356 "name": "BaseBdev2", 00:09:09.356 "uuid": "5b21d39d-3402-4bf8-8b3b-051c9ba75651", 00:09:09.356 "is_configured": true, 00:09:09.356 "data_offset": 2048, 00:09:09.356 "data_size": 63488 00:09:09.356 }, 00:09:09.356 { 
00:09:09.356 "name": "BaseBdev3", 00:09:09.356 "uuid": "13d32ff8-4c5f-42d9-9760-ee460f9f2a04", 00:09:09.356 "is_configured": true, 00:09:09.356 "data_offset": 2048, 00:09:09.356 "data_size": 63488 00:09:09.356 } 00:09:09.356 ] 00:09:09.356 } 00:09:09.356 } 00:09:09.356 }' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:09.356 BaseBdev2 00:09:09.356 BaseBdev3' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.356 23:26:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.356 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.617 [2024-09-30 23:26:49.239358] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.617 
23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.617 "name": "Existed_Raid", 00:09:09.617 "uuid": "d26357f8-fa91-4eb8-be3e-a07c570b3eac", 00:09:09.617 "strip_size_kb": 0, 00:09:09.617 "state": "online", 00:09:09.617 "raid_level": "raid1", 00:09:09.617 "superblock": true, 00:09:09.617 "num_base_bdevs": 3, 00:09:09.617 "num_base_bdevs_discovered": 2, 00:09:09.617 "num_base_bdevs_operational": 2, 00:09:09.617 "base_bdevs_list": [ 00:09:09.617 { 00:09:09.617 "name": null, 00:09:09.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.617 "is_configured": false, 00:09:09.617 "data_offset": 0, 00:09:09.617 "data_size": 63488 00:09:09.617 }, 00:09:09.617 { 00:09:09.617 "name": "BaseBdev2", 00:09:09.617 "uuid": "5b21d39d-3402-4bf8-8b3b-051c9ba75651", 00:09:09.617 "is_configured": true, 00:09:09.617 "data_offset": 2048, 00:09:09.617 "data_size": 63488 00:09:09.617 }, 00:09:09.617 { 00:09:09.617 "name": "BaseBdev3", 00:09:09.617 "uuid": "13d32ff8-4c5f-42d9-9760-ee460f9f2a04", 00:09:09.617 "is_configured": true, 00:09:09.617 "data_offset": 2048, 00:09:09.617 "data_size": 63488 00:09:09.617 } 00:09:09.617 ] 00:09:09.617 }' 00:09:09.617 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.617 
23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.877 [2024-09-30 23:26:49.709925] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.877 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.138 [2024-09-30 23:26:49.761142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.138 [2024-09-30 23:26:49.761322] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.138 [2024-09-30 23:26:49.772779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.138 [2024-09-30 23:26:49.772907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.138 [2024-09-30 23:26:49.772927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:10.138 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.139 BaseBdev2 00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.139 23:26:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.139 [
00:09:10.139 {
00:09:10.139 "name": "BaseBdev2",
00:09:10.139 "aliases": [
00:09:10.139 "6370f854-83bd-4ce6-9b3e-01a6498e7ad0"
00:09:10.139 ],
00:09:10.139 "product_name": "Malloc disk",
00:09:10.139 "block_size": 512,
00:09:10.139 "num_blocks": 65536,
00:09:10.139 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0",
00:09:10.139 "assigned_rate_limits": {
00:09:10.139 "rw_ios_per_sec": 0,
00:09:10.139 "rw_mbytes_per_sec": 0,
00:09:10.139 "r_mbytes_per_sec": 0,
00:09:10.139 "w_mbytes_per_sec": 0
00:09:10.139 },
00:09:10.139 "claimed": false,
00:09:10.139 "zoned": false,
00:09:10.139 "supported_io_types": {
00:09:10.139 "read": true,
00:09:10.139 "write": true,
00:09:10.139 "unmap": true,
00:09:10.139 "flush": true,
00:09:10.139 "reset": true,
00:09:10.139 "nvme_admin": false,
00:09:10.139 "nvme_io": false,
00:09:10.139 "nvme_io_md": false,
00:09:10.139 "write_zeroes": true,
00:09:10.139 "zcopy": true,
00:09:10.139 "get_zone_info": false,
00:09:10.139 "zone_management": false,
00:09:10.139 "zone_append": false,
00:09:10.139 "compare": false,
00:09:10.139 "compare_and_write": false,
00:09:10.139 "abort": true,
00:09:10.139 "seek_hole": false,
00:09:10.139 "seek_data": false,
00:09:10.139 "copy": true,
00:09:10.139 "nvme_iov_md": false
00:09:10.139 },
00:09:10.139 "memory_domains": [
00:09:10.139 {
00:09:10.139 "dma_device_id": "system",
00:09:10.139 "dma_device_type": 1
00:09:10.139 },
00:09:10.139 {
00:09:10.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:10.139 "dma_device_type": 2
00:09:10.139 }
00:09:10.139 ],
00:09:10.139 "driver_specific": {}
00:09:10.139 }
00:09:10.139 ]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.139 BaseBdev3
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.139 [
00:09:10.139 {
00:09:10.139 "name": "BaseBdev3",
00:09:10.139 "aliases": [
00:09:10.139 "cc25caa5-99a6-4be9-b9a9-516e008c2b95"
00:09:10.139 ],
00:09:10.139 "product_name": "Malloc disk",
00:09:10.139 "block_size": 512,
00:09:10.139 "num_blocks": 65536,
00:09:10.139 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95",
00:09:10.139 "assigned_rate_limits": {
00:09:10.139 "rw_ios_per_sec": 0,
00:09:10.139 "rw_mbytes_per_sec": 0,
00:09:10.139 "r_mbytes_per_sec": 0,
00:09:10.139 "w_mbytes_per_sec": 0
00:09:10.139 },
00:09:10.139 "claimed": false,
00:09:10.139 "zoned": false,
00:09:10.139 "supported_io_types": {
00:09:10.139 "read": true,
00:09:10.139 "write": true,
00:09:10.139 "unmap": true,
00:09:10.139 "flush": true,
00:09:10.139 "reset": true,
00:09:10.139 "nvme_admin": false,
00:09:10.139 "nvme_io": false,
00:09:10.139 "nvme_io_md": false,
00:09:10.139 "write_zeroes": true,
00:09:10.139 "zcopy": true,
00:09:10.139 "get_zone_info": false,
00:09:10.139 "zone_management": false,
00:09:10.139 "zone_append": false,
00:09:10.139 "compare": false,
00:09:10.139 "compare_and_write": false,
00:09:10.139 "abort": true,
00:09:10.139 "seek_hole": false,
00:09:10.139 "seek_data": false,
00:09:10.139 "copy": true,
00:09:10.139 "nvme_iov_md": false
00:09:10.139 },
00:09:10.139 "memory_domains": [
00:09:10.139 {
00:09:10.139 "dma_device_id": "system",
00:09:10.139 "dma_device_type": 1
00:09:10.139 },
00:09:10.139 {
00:09:10.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:10.139 "dma_device_type": 2
00:09:10.139 }
00:09:10.139 ],
00:09:10.139 "driver_specific": {}
00:09:10.139 }
00:09:10.139 ]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.139 [2024-09-30 23:26:49.931497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:10.139 [2024-09-30 23:26:49.931613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:10.139 [2024-09-30 23:26:49.931652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:10.139 [2024-09-30 23:26:49.933493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.139 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:10.139 "name": "Existed_Raid",
00:09:10.139 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd",
00:09:10.139 "strip_size_kb": 0,
00:09:10.139 "state": "configuring",
00:09:10.139 "raid_level": "raid1",
00:09:10.139 "superblock": true,
00:09:10.139 "num_base_bdevs": 3,
00:09:10.140 "num_base_bdevs_discovered": 2,
00:09:10.140 "num_base_bdevs_operational": 3,
00:09:10.140 "base_bdevs_list": [
00:09:10.140 {
00:09:10.140 "name": "BaseBdev1",
00:09:10.140 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:10.140 "is_configured": false,
00:09:10.140 "data_offset": 0,
00:09:10.140 "data_size": 0
00:09:10.140 },
00:09:10.140 {
00:09:10.140 "name": "BaseBdev2",
00:09:10.140 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0",
00:09:10.140 "is_configured": true,
00:09:10.140 "data_offset": 2048,
00:09:10.140 "data_size": 63488
00:09:10.140 },
00:09:10.140 {
00:09:10.140 "name": "BaseBdev3",
00:09:10.140 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95",
00:09:10.140 "is_configured": true,
00:09:10.140 "data_offset": 2048,
00:09:10.140 "data_size": 63488
00:09:10.140 }
00:09:10.140 ]
00:09:10.140 }'
00:09:10.140 23:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:10.140 23:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.708 [2024-09-30 23:26:50.382788] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:10.708 "name": "Existed_Raid",
00:09:10.708 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd",
00:09:10.708 "strip_size_kb": 0,
00:09:10.708 "state": "configuring",
00:09:10.708 "raid_level": "raid1",
00:09:10.708 "superblock": true,
00:09:10.708 "num_base_bdevs": 3,
00:09:10.708 "num_base_bdevs_discovered": 1,
00:09:10.708 "num_base_bdevs_operational": 3,
00:09:10.708 "base_bdevs_list": [
00:09:10.708 {
00:09:10.708 "name": "BaseBdev1",
00:09:10.708 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:10.708 "is_configured": false,
00:09:10.708 "data_offset": 0,
00:09:10.708 "data_size": 0
00:09:10.708 },
00:09:10.708 {
00:09:10.708 "name": null,
00:09:10.708 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0",
00:09:10.708 "is_configured": false,
00:09:10.708 "data_offset": 0,
00:09:10.708 "data_size": 63488
00:09:10.708 },
00:09:10.708 {
00:09:10.708 "name": "BaseBdev3",
00:09:10.708 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95",
00:09:10.708 "is_configured": true,
00:09:10.708 "data_offset": 2048,
00:09:10.708 "data_size": 63488
00:09:10.708 }
00:09:10.708 ]
00:09:10.708 }'
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:10.708 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.000 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:11.000 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:11.000 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.000 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.000 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.000 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:11.001 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:11.001 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.001 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.269 [2024-09-30 23:26:50.856765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:11.269 BaseBdev1
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.269 [
00:09:11.269 {
00:09:11.269 "name": "BaseBdev1",
00:09:11.269 "aliases": [
00:09:11.269 "0ecdbd38-3ec6-41d7-af89-ac2427932785"
00:09:11.269 ],
00:09:11.269 "product_name": "Malloc disk",
00:09:11.269 "block_size": 512,
00:09:11.269 "num_blocks": 65536,
00:09:11.269 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785",
00:09:11.269 "assigned_rate_limits": {
00:09:11.269 "rw_ios_per_sec": 0,
00:09:11.269 "rw_mbytes_per_sec": 0,
00:09:11.269 "r_mbytes_per_sec": 0,
00:09:11.269 "w_mbytes_per_sec": 0
00:09:11.269 },
00:09:11.269 "claimed": true,
00:09:11.269 "claim_type": "exclusive_write",
00:09:11.269 "zoned": false,
00:09:11.269 "supported_io_types": {
00:09:11.269 "read": true,
00:09:11.269 "write": true,
00:09:11.269 "unmap": true,
00:09:11.269 "flush": true,
00:09:11.269 "reset": true,
00:09:11.269 "nvme_admin": false,
00:09:11.269 "nvme_io": false,
00:09:11.269 "nvme_io_md": false,
00:09:11.269 "write_zeroes": true,
00:09:11.269 "zcopy": true,
00:09:11.269 "get_zone_info": false,
00:09:11.269 "zone_management": false,
00:09:11.269 "zone_append": false,
00:09:11.269 "compare": false,
00:09:11.269 "compare_and_write": false,
00:09:11.269 "abort": true,
00:09:11.269 "seek_hole": false,
00:09:11.269 "seek_data": false,
00:09:11.269 "copy": true,
00:09:11.269 "nvme_iov_md": false
00:09:11.269 },
00:09:11.269 "memory_domains": [
00:09:11.269 {
00:09:11.269 "dma_device_id": "system",
00:09:11.269 "dma_device_type": 1
00:09:11.269 },
00:09:11.269 {
00:09:11.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:11.269 "dma_device_type": 2
00:09:11.269 }
00:09:11.269 ],
00:09:11.269 "driver_specific": {}
00:09:11.269 }
00:09:11.269 ]
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.269 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:11.269 "name": "Existed_Raid",
00:09:11.269 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd",
00:09:11.269 "strip_size_kb": 0,
00:09:11.269 "state": "configuring",
00:09:11.270 "raid_level": "raid1",
00:09:11.270 "superblock": true,
00:09:11.270 "num_base_bdevs": 3,
00:09:11.270 "num_base_bdevs_discovered": 2,
00:09:11.270 "num_base_bdevs_operational": 3,
00:09:11.270 "base_bdevs_list": [
00:09:11.270 {
00:09:11.270 "name": "BaseBdev1",
00:09:11.270 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785",
00:09:11.270 "is_configured": true,
00:09:11.270 "data_offset": 2048,
00:09:11.270 "data_size": 63488
00:09:11.270 },
00:09:11.270 {
00:09:11.270 "name": null,
00:09:11.270 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0",
00:09:11.270 "is_configured": false,
00:09:11.270 "data_offset": 0,
00:09:11.270 "data_size": 63488
00:09:11.270 },
00:09:11.270 {
00:09:11.270 "name": "BaseBdev3",
00:09:11.270 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95",
00:09:11.270 "is_configured": true,
00:09:11.270 "data_offset": 2048,
00:09:11.270 "data_size": 63488
00:09:11.270 }
00:09:11.270 ]
00:09:11.270 }'
00:09:11.270 23:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:11.270 23:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.528 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:11.528 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:11.528 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.528 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.528 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.786 [2024-09-30 23:26:51.395888] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:11.786 "name": "Existed_Raid",
00:09:11.786 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd",
00:09:11.786 "strip_size_kb": 0,
00:09:11.786 "state": "configuring",
00:09:11.786 "raid_level": "raid1",
00:09:11.786 "superblock": true,
00:09:11.786 "num_base_bdevs": 3,
00:09:11.786 "num_base_bdevs_discovered": 1,
00:09:11.786 "num_base_bdevs_operational": 3,
00:09:11.786 "base_bdevs_list": [
00:09:11.786 {
00:09:11.786 "name": "BaseBdev1",
00:09:11.786 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785",
00:09:11.786 "is_configured": true,
00:09:11.786 "data_offset": 2048,
00:09:11.786 "data_size": 63488
00:09:11.786 },
00:09:11.786 {
00:09:11.786 "name": null,
00:09:11.786 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0",
00:09:11.786 "is_configured": false,
00:09:11.786 "data_offset": 0,
00:09:11.786 "data_size": 63488
00:09:11.786 },
00:09:11.786 {
00:09:11.786 "name": null,
00:09:11.786 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95",
00:09:11.786 "is_configured": false,
00:09:11.786 "data_offset": 0,
00:09:11.786 "data_size": 63488
00:09:11.786 }
00:09:11.786 ]
00:09:11.786 }'
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:11.786 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.045 [2024-09-30 23:26:51.883133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.045 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.303 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.303 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:12.303 "name": "Existed_Raid",
00:09:12.303 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd",
00:09:12.303 "strip_size_kb": 0,
00:09:12.303 "state": "configuring",
00:09:12.303 "raid_level": "raid1",
00:09:12.303 "superblock": true,
00:09:12.303 "num_base_bdevs": 3,
00:09:12.303 "num_base_bdevs_discovered": 2,
00:09:12.303 "num_base_bdevs_operational": 3,
00:09:12.303 "base_bdevs_list": [
00:09:12.303 {
00:09:12.303 "name": "BaseBdev1",
00:09:12.303 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785",
00:09:12.303 "is_configured": true,
00:09:12.303 "data_offset": 2048,
00:09:12.303 "data_size": 63488
00:09:12.303 },
00:09:12.303 {
00:09:12.303 "name": null,
00:09:12.303 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0",
00:09:12.303 "is_configured": false,
00:09:12.303 "data_offset": 0,
00:09:12.303 "data_size": 63488
00:09:12.303 },
00:09:12.303 {
00:09:12.303 "name": "BaseBdev3",
00:09:12.303 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95",
00:09:12.303 "is_configured": true,
00:09:12.303 "data_offset": 2048,
00:09:12.303 "data_size": 63488
00:09:12.303 }
00:09:12.303 ]
00:09:12.303 }'
00:09:12.303 23:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:12.303 23:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.562 [2024-09-30 23:26:52.362368] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:12.562 "name": "Existed_Raid",
00:09:12.562 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd",
00:09:12.562 "strip_size_kb": 0,
00:09:12.562 "state": "configuring",
00:09:12.562 "raid_level": "raid1",
00:09:12.562 "superblock": true,
00:09:12.562 "num_base_bdevs": 3,
00:09:12.562 "num_base_bdevs_discovered": 1,
00:09:12.562 "num_base_bdevs_operational": 3,
00:09:12.562 "base_bdevs_list": [
00:09:12.562 {
00:09:12.562 "name": null,
00:09:12.562 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785",
00:09:12.562 "is_configured": false,
00:09:12.562 "data_offset": 0,
00:09:12.562 "data_size": 63488
00:09:12.562 },
00:09:12.562 {
00:09:12.562 "name": null,
00:09:12.562 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0",
00:09:12.562 "is_configured": false,
00:09:12.562 "data_offset": 0,
00:09:12.562 "data_size": 63488
00:09:12.562 },
00:09:12.562 {
00:09:12.562 "name": "BaseBdev3",
00:09:12.562 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95",
00:09:12.562 "is_configured": true,
00:09:12.562 "data_offset": 2048,
00:09:12.562 "data_size": 63488
00:09:12.562 }
00:09:12.562 ]
00:09:12.562 }'
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:12.562 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:13.130 [2024-09-30 23:26:52.808195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:13.130 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:13.130 "name": "Existed_Raid",
00:09:13.130 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd",
00:09:13.130 "strip_size_kb": 0,
00:09:13.130 "state": "configuring",
"raid_level": "raid1", 00:09:13.130 "superblock": true, 00:09:13.130 "num_base_bdevs": 3, 00:09:13.130 "num_base_bdevs_discovered": 2, 00:09:13.130 "num_base_bdevs_operational": 3, 00:09:13.130 "base_bdevs_list": [ 00:09:13.130 { 00:09:13.130 "name": null, 00:09:13.130 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785", 00:09:13.130 "is_configured": false, 00:09:13.130 "data_offset": 0, 00:09:13.130 "data_size": 63488 00:09:13.130 }, 00:09:13.130 { 00:09:13.130 "name": "BaseBdev2", 00:09:13.130 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0", 00:09:13.130 "is_configured": true, 00:09:13.130 "data_offset": 2048, 00:09:13.130 "data_size": 63488 00:09:13.130 }, 00:09:13.130 { 00:09:13.130 "name": "BaseBdev3", 00:09:13.130 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95", 00:09:13.130 "is_configured": true, 00:09:13.130 "data_offset": 2048, 00:09:13.131 "data_size": 63488 00:09:13.131 } 00:09:13.131 ] 00:09:13.131 }' 00:09:13.131 23:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.131 23:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:13.699 23:26:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0ecdbd38-3ec6-41d7-af89-ac2427932785 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.699 [2024-09-30 23:26:53.354153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:13.699 [2024-09-30 23:26:53.354411] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:13.699 [2024-09-30 23:26:53.354450] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:13.699 [2024-09-30 23:26:53.354743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:13.699 NewBaseBdev 00:09:13.699 [2024-09-30 23:26:53.354940] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:13.699 [2024-09-30 23:26:53.354958] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:13.699 [2024-09-30 23:26:53.355060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:13.699 
23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.699 [ 00:09:13.699 { 00:09:13.699 "name": "NewBaseBdev", 00:09:13.699 "aliases": [ 00:09:13.699 "0ecdbd38-3ec6-41d7-af89-ac2427932785" 00:09:13.699 ], 00:09:13.699 "product_name": "Malloc disk", 00:09:13.699 "block_size": 512, 00:09:13.699 "num_blocks": 65536, 00:09:13.699 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785", 00:09:13.699 "assigned_rate_limits": { 00:09:13.699 "rw_ios_per_sec": 0, 00:09:13.699 "rw_mbytes_per_sec": 0, 00:09:13.699 "r_mbytes_per_sec": 0, 00:09:13.699 "w_mbytes_per_sec": 0 00:09:13.699 }, 00:09:13.699 "claimed": true, 00:09:13.699 "claim_type": "exclusive_write", 00:09:13.699 
"zoned": false, 00:09:13.699 "supported_io_types": { 00:09:13.699 "read": true, 00:09:13.699 "write": true, 00:09:13.699 "unmap": true, 00:09:13.699 "flush": true, 00:09:13.699 "reset": true, 00:09:13.699 "nvme_admin": false, 00:09:13.699 "nvme_io": false, 00:09:13.699 "nvme_io_md": false, 00:09:13.699 "write_zeroes": true, 00:09:13.699 "zcopy": true, 00:09:13.699 "get_zone_info": false, 00:09:13.699 "zone_management": false, 00:09:13.699 "zone_append": false, 00:09:13.699 "compare": false, 00:09:13.699 "compare_and_write": false, 00:09:13.699 "abort": true, 00:09:13.699 "seek_hole": false, 00:09:13.699 "seek_data": false, 00:09:13.699 "copy": true, 00:09:13.699 "nvme_iov_md": false 00:09:13.699 }, 00:09:13.699 "memory_domains": [ 00:09:13.699 { 00:09:13.699 "dma_device_id": "system", 00:09:13.699 "dma_device_type": 1 00:09:13.699 }, 00:09:13.699 { 00:09:13.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.699 "dma_device_type": 2 00:09:13.699 } 00:09:13.699 ], 00:09:13.699 "driver_specific": {} 00:09:13.699 } 00:09:13.699 ] 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.699 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.700 "name": "Existed_Raid", 00:09:13.700 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd", 00:09:13.700 "strip_size_kb": 0, 00:09:13.700 "state": "online", 00:09:13.700 "raid_level": "raid1", 00:09:13.700 "superblock": true, 00:09:13.700 "num_base_bdevs": 3, 00:09:13.700 "num_base_bdevs_discovered": 3, 00:09:13.700 "num_base_bdevs_operational": 3, 00:09:13.700 "base_bdevs_list": [ 00:09:13.700 { 00:09:13.700 "name": "NewBaseBdev", 00:09:13.700 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785", 00:09:13.700 "is_configured": true, 00:09:13.700 "data_offset": 2048, 00:09:13.700 "data_size": 63488 00:09:13.700 }, 00:09:13.700 { 00:09:13.700 "name": "BaseBdev2", 00:09:13.700 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0", 00:09:13.700 "is_configured": true, 00:09:13.700 "data_offset": 2048, 00:09:13.700 "data_size": 63488 00:09:13.700 }, 00:09:13.700 
{ 00:09:13.700 "name": "BaseBdev3", 00:09:13.700 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95", 00:09:13.700 "is_configured": true, 00:09:13.700 "data_offset": 2048, 00:09:13.700 "data_size": 63488 00:09:13.700 } 00:09:13.700 ] 00:09:13.700 }' 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.700 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.958 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.958 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.959 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.218 [2024-09-30 23:26:53.821679] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.218 "name": "Existed_Raid", 00:09:14.218 
"aliases": [ 00:09:14.218 "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd" 00:09:14.218 ], 00:09:14.218 "product_name": "Raid Volume", 00:09:14.218 "block_size": 512, 00:09:14.218 "num_blocks": 63488, 00:09:14.218 "uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd", 00:09:14.218 "assigned_rate_limits": { 00:09:14.218 "rw_ios_per_sec": 0, 00:09:14.218 "rw_mbytes_per_sec": 0, 00:09:14.218 "r_mbytes_per_sec": 0, 00:09:14.218 "w_mbytes_per_sec": 0 00:09:14.218 }, 00:09:14.218 "claimed": false, 00:09:14.218 "zoned": false, 00:09:14.218 "supported_io_types": { 00:09:14.218 "read": true, 00:09:14.218 "write": true, 00:09:14.218 "unmap": false, 00:09:14.218 "flush": false, 00:09:14.218 "reset": true, 00:09:14.218 "nvme_admin": false, 00:09:14.218 "nvme_io": false, 00:09:14.218 "nvme_io_md": false, 00:09:14.218 "write_zeroes": true, 00:09:14.218 "zcopy": false, 00:09:14.218 "get_zone_info": false, 00:09:14.218 "zone_management": false, 00:09:14.218 "zone_append": false, 00:09:14.218 "compare": false, 00:09:14.218 "compare_and_write": false, 00:09:14.218 "abort": false, 00:09:14.218 "seek_hole": false, 00:09:14.218 "seek_data": false, 00:09:14.218 "copy": false, 00:09:14.218 "nvme_iov_md": false 00:09:14.218 }, 00:09:14.218 "memory_domains": [ 00:09:14.218 { 00:09:14.218 "dma_device_id": "system", 00:09:14.218 "dma_device_type": 1 00:09:14.218 }, 00:09:14.218 { 00:09:14.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.218 "dma_device_type": 2 00:09:14.218 }, 00:09:14.218 { 00:09:14.218 "dma_device_id": "system", 00:09:14.218 "dma_device_type": 1 00:09:14.218 }, 00:09:14.218 { 00:09:14.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.218 "dma_device_type": 2 00:09:14.218 }, 00:09:14.218 { 00:09:14.218 "dma_device_id": "system", 00:09:14.218 "dma_device_type": 1 00:09:14.218 }, 00:09:14.218 { 00:09:14.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.218 "dma_device_type": 2 00:09:14.218 } 00:09:14.218 ], 00:09:14.218 "driver_specific": { 00:09:14.218 "raid": { 00:09:14.218 
"uuid": "d2fbd8c7-00fc-43ea-ab48-d4e0629944dd", 00:09:14.218 "strip_size_kb": 0, 00:09:14.218 "state": "online", 00:09:14.218 "raid_level": "raid1", 00:09:14.218 "superblock": true, 00:09:14.218 "num_base_bdevs": 3, 00:09:14.218 "num_base_bdevs_discovered": 3, 00:09:14.218 "num_base_bdevs_operational": 3, 00:09:14.218 "base_bdevs_list": [ 00:09:14.218 { 00:09:14.218 "name": "NewBaseBdev", 00:09:14.218 "uuid": "0ecdbd38-3ec6-41d7-af89-ac2427932785", 00:09:14.218 "is_configured": true, 00:09:14.218 "data_offset": 2048, 00:09:14.218 "data_size": 63488 00:09:14.218 }, 00:09:14.218 { 00:09:14.218 "name": "BaseBdev2", 00:09:14.218 "uuid": "6370f854-83bd-4ce6-9b3e-01a6498e7ad0", 00:09:14.218 "is_configured": true, 00:09:14.218 "data_offset": 2048, 00:09:14.218 "data_size": 63488 00:09:14.218 }, 00:09:14.218 { 00:09:14.218 "name": "BaseBdev3", 00:09:14.218 "uuid": "cc25caa5-99a6-4be9-b9a9-516e008c2b95", 00:09:14.218 "is_configured": true, 00:09:14.218 "data_offset": 2048, 00:09:14.218 "data_size": 63488 00:09:14.218 } 00:09:14.218 ] 00:09:14.218 } 00:09:14.218 } 00:09:14.218 }' 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:14.218 BaseBdev2 00:09:14.218 BaseBdev3' 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.218 
23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.218 23:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.218 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.478 [2024-09-30 23:26:54.080953] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.478 [2024-09-30 23:26:54.081035] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.478 [2024-09-30 23:26:54.081112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.478 [2024-09-30 23:26:54.081370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.478 [2024-09-30 23:26:54.081380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79107 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79107 ']' 00:09:14.478 23:26:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79107 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79107 00:09:14.478 killing process with pid 79107 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.478 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79107' 00:09:14.479 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79107 00:09:14.479 [2024-09-30 23:26:54.125652] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:14.479 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79107 00:09:14.479 [2024-09-30 23:26:54.156645] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.738 23:26:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:14.738 00:09:14.738 real 0m8.813s 00:09:14.738 user 0m14.987s 00:09:14.738 sys 0m1.897s 00:09:14.738 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.738 23:26:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.738 ************************************ 00:09:14.738 END TEST raid_state_function_test_sb 00:09:14.738 ************************************ 00:09:14.738 23:26:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:14.738 23:26:54 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:14.738 23:26:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.738 23:26:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.738 ************************************ 00:09:14.738 START TEST raid_superblock_test 00:09:14.738 ************************************ 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:14.738 23:26:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79705 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79705 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79705 ']' 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.738 23:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.738 [2024-09-30 23:26:54.572043] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:14.738 [2024-09-30 23:26:54.572254] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79705 ]
00:09:14.997 [2024-09-30 23:26:54.733204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.997 [2024-09-30 23:26:54.778094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.997 [2024-09-30 23:26:54.820429] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:14.997 [2024-09-30 23:26:54.820549] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.567 malloc1
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.567 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.826 [2024-09-30 23:26:55.422699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:15.826 [2024-09-30 23:26:55.422876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:15.826 [2024-09-30 23:26:55.422924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:15.826 [2024-09-30 23:26:55.422985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:15.826 [2024-09-30 23:26:55.425083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:15.826 [2024-09-30 23:26:55.425185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:15.826 pt1
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.826 malloc2
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.826 [2024-09-30 23:26:55.475267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:15.826 [2024-09-30 23:26:55.475424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:15.826 [2024-09-30 23:26:55.475476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:15.826 [2024-09-30 23:26:55.475513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:15.826 [2024-09-30 23:26:55.480168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:15.826 [2024-09-30 23:26:55.480242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:15.826 pt2
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.826 malloc3
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.826 [2024-09-30 23:26:55.506189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:15.826 [2024-09-30 23:26:55.506315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:15.826 [2024-09-30 23:26:55.506350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:15.826 [2024-09-30 23:26:55.506379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:15.826 [2024-09-30 23:26:55.508429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:15.826 [2024-09-30 23:26:55.508508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:15.826 pt3
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.826 [2024-09-30 23:26:55.518207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:15.826 [2024-09-30 23:26:55.520092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:15.826 [2024-09-30 23:26:55.520195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:15.826 [2024-09-30 23:26:55.520371] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:09:15.826 [2024-09-30 23:26:55.520416] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:15.826 [2024-09-30 23:26:55.520689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:09:15.826 [2024-09-30 23:26:55.520871] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:09:15.826 [2024-09-30 23:26:55.520919] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:09:15.826 [2024-09-30 23:26:55.521060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:15.826 "name": "raid_bdev1",
00:09:15.826 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4",
00:09:15.826 "strip_size_kb": 0,
00:09:15.826 "state": "online",
00:09:15.826 "raid_level": "raid1",
00:09:15.826 "superblock": true,
00:09:15.826 "num_base_bdevs": 3,
00:09:15.826 "num_base_bdevs_discovered": 3,
00:09:15.826 "num_base_bdevs_operational": 3,
00:09:15.826 "base_bdevs_list": [
00:09:15.826 {
00:09:15.826 "name": "pt1",
00:09:15.826 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:15.826 "is_configured": true,
00:09:15.826 "data_offset": 2048,
00:09:15.826 "data_size": 63488
00:09:15.826 },
00:09:15.826 {
00:09:15.826 "name": "pt2",
00:09:15.826 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:15.826 "is_configured": true,
00:09:15.826 "data_offset": 2048,
00:09:15.826 "data_size": 63488
00:09:15.826 },
00:09:15.826 {
00:09:15.826 "name": "pt3",
00:09:15.826 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:15.826 "is_configured": true,
00:09:15.826 "data_offset": 2048,
00:09:15.826 "data_size": 63488
00:09:15.826 }
00:09:15.826 ]
00:09:15.826 }'
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:15.826 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.393 23:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.393 [2024-09-30 23:26:55.985695] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:16.393 "name": "raid_bdev1",
00:09:16.393 "aliases": [
00:09:16.393 "b4e56ef7-358e-4fc6-935e-e450e5ddaae4"
00:09:16.393 ],
00:09:16.393 "product_name": "Raid Volume",
00:09:16.393 "block_size": 512,
00:09:16.393 "num_blocks": 63488,
00:09:16.393 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4",
00:09:16.393 "assigned_rate_limits": {
00:09:16.393 "rw_ios_per_sec": 0,
00:09:16.393 "rw_mbytes_per_sec": 0,
00:09:16.393 "r_mbytes_per_sec": 0,
00:09:16.393 "w_mbytes_per_sec": 0
00:09:16.393 },
00:09:16.393 "claimed": false,
00:09:16.393 "zoned": false,
00:09:16.393 "supported_io_types": {
00:09:16.393 "read": true,
00:09:16.393 "write": true,
00:09:16.393 "unmap": false,
00:09:16.393 "flush": false,
00:09:16.393 "reset": true,
00:09:16.393 "nvme_admin": false,
00:09:16.393 "nvme_io": false,
00:09:16.393 "nvme_io_md": false,
00:09:16.393 "write_zeroes": true,
00:09:16.393 "zcopy": false,
00:09:16.393 "get_zone_info": false,
00:09:16.393 "zone_management": false,
00:09:16.393 "zone_append": false,
00:09:16.393 "compare": false,
00:09:16.393 "compare_and_write": false,
00:09:16.393 "abort": false,
00:09:16.393 "seek_hole": false,
00:09:16.393 "seek_data": false,
00:09:16.393 "copy": false,
00:09:16.393 "nvme_iov_md": false
00:09:16.393 },
00:09:16.393 "memory_domains": [
00:09:16.393 {
00:09:16.393 "dma_device_id": "system",
00:09:16.393 "dma_device_type": 1
00:09:16.393 },
00:09:16.393 {
00:09:16.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:16.393 "dma_device_type": 2
00:09:16.393 },
00:09:16.393 {
00:09:16.393 "dma_device_id": "system",
00:09:16.393 "dma_device_type": 1
00:09:16.393 },
00:09:16.393 {
00:09:16.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:16.393 "dma_device_type": 2
00:09:16.393 },
00:09:16.393 {
00:09:16.393 "dma_device_id": "system",
00:09:16.393 "dma_device_type": 1
00:09:16.393 },
00:09:16.393 {
00:09:16.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:16.393 "dma_device_type": 2
00:09:16.393 }
00:09:16.393 ],
00:09:16.393 "driver_specific": {
00:09:16.393 "raid": {
00:09:16.393 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4",
00:09:16.393 "strip_size_kb": 0,
00:09:16.393 "state": "online",
00:09:16.393 "raid_level": "raid1",
00:09:16.393 "superblock": true,
00:09:16.393 "num_base_bdevs": 3,
00:09:16.393 "num_base_bdevs_discovered": 3,
00:09:16.393 "num_base_bdevs_operational": 3,
00:09:16.393 "base_bdevs_list": [
00:09:16.393 {
00:09:16.393 "name": "pt1",
00:09:16.393 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:16.393 "is_configured": true,
00:09:16.393 "data_offset": 2048,
00:09:16.393 "data_size": 63488
00:09:16.393 },
00:09:16.393 {
00:09:16.393 "name": "pt2",
00:09:16.393 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:16.393 "is_configured": true,
00:09:16.393 "data_offset": 2048,
00:09:16.393 "data_size": 63488
00:09:16.393 },
00:09:16.393 {
00:09:16.393 "name": "pt3",
00:09:16.393 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:16.393 "is_configured": true,
00:09:16.393 "data_offset": 2048,
00:09:16.393 "data_size": 63488
00:09:16.393 }
00:09:16.393 ]
00:09:16.393 }
00:09:16.393 }
00:09:16.393 }'
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:16.393 pt2
00:09:16.393 pt3'
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.393 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.652 [2024-09-30 23:26:56.277152] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b4e56ef7-358e-4fc6-935e-e450e5ddaae4
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b4e56ef7-358e-4fc6-935e-e450e5ddaae4 ']'
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.652 [2024-09-30 23:26:56.324811] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:16.652 [2024-09-30 23:26:56.324941] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:16.652 [2024-09-30 23:26:56.325043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:16.652 [2024-09-30 23:26:56.325141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:16.652 [2024-09-30 23:26:56.325195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:16.652 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.653 [2024-09-30 23:26:56.460592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:16.653 [2024-09-30 23:26:56.462453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:16.653 [2024-09-30 23:26:56.462539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:16.653 [2024-09-30 23:26:56.462607] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:16.653 [2024-09-30 23:26:56.462679] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:16.653 [2024-09-30 23:26:56.462700] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:16.653 [2024-09-30 23:26:56.462712] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:16.653 [2024-09-30 23:26:56.462722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:09:16.653 request:
00:09:16.653 {
00:09:16.653 "name": "raid_bdev1",
00:09:16.653 "raid_level": "raid1",
00:09:16.653 "base_bdevs": [
00:09:16.653 "malloc1",
00:09:16.653 "malloc2",
00:09:16.653 "malloc3"
00:09:16.653 ],
00:09:16.653 "superblock": false,
00:09:16.653 "method": "bdev_raid_create",
00:09:16.653 "req_id": 1
00:09:16.653 }
00:09:16.653 Got JSON-RPC error response
00:09:16.653 response:
00:09:16.653 {
00:09:16.653 "code": -17,
00:09:16.653 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:16.653 }
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:16.653 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.910 [2024-09-30 23:26:56.528453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:16.910 [2024-09-30 23:26:56.528568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:16.910 [2024-09-30 23:26:56.528603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:16.910 [2024-09-30 23:26:56.528634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:16.910 [2024-09-30 23:26:56.530648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:16.910 [2024-09-30 23:26:56.530719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:16.910 [2024-09-30 23:26:56.530798] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:16.910 [2024-09-30 23:26:56.530870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:16.910 pt1
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:16.910 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:16.910 "name": "raid_bdev1",
00:09:16.910 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4",
00:09:16.910 "strip_size_kb": 0,
00:09:16.910 "state": "configuring",
00:09:16.910 "raid_level": "raid1",
00:09:16.910 "superblock": true,
00:09:16.910 "num_base_bdevs": 3,
00:09:16.910 "num_base_bdevs_discovered": 1,
00:09:16.910 "num_base_bdevs_operational": 3,
00:09:16.910 "base_bdevs_list": [
00:09:16.911 {
00:09:16.911 "name": "pt1",
00:09:16.911 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:16.911 "is_configured": true,
00:09:16.911 "data_offset": 2048,
00:09:16.911 "data_size": 63488
00:09:16.911 },
00:09:16.911 {
00:09:16.911 "name": null,
00:09:16.911 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:16.911 "is_configured": false,
00:09:16.911 "data_offset": 2048,
00:09:16.911 "data_size": 63488
00:09:16.911 },
00:09:16.911 {
00:09:16.911 "name": null,
00:09:16.911 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:16.911 "is_configured": false,
00:09:16.911 "data_offset": 2048,
00:09:16.911 "data_size": 63488
00:09:16.911 }
00:09:16.911 ]
00:09:16.911 }'
00:09:16.911 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:16.911 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.169 [2024-09-30 23:26:56.955757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:17.169 [2024-09-30 23:26:56.955876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:17.169 [2024-09-30 23:26:56.955913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:09:17.169 [2024-09-30 23:26:56.955945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:17.169 [2024-09-30 23:26:56.956342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:17.169 [2024-09-30 23:26:56.956409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:17.169 [2024-09-30 23:26:56.956506] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:17.169 [2024-09-30 23:26:56.956556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:17.169 pt2
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.169 [2024-09-30 23:26:56.963751] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.169 23:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.169 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:17.169 "name": "raid_bdev1",
00:09:17.169 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4",
00:09:17.169 "strip_size_kb": 0,
00:09:17.169 "state": "configuring",
00:09:17.169 "raid_level": "raid1",
00:09:17.169 "superblock": true,
00:09:17.169 "num_base_bdevs": 3,
00:09:17.169 "num_base_bdevs_discovered": 1,
00:09:17.169 "num_base_bdevs_operational": 3,
00:09:17.169 "base_bdevs_list": [
00:09:17.169 {
00:09:17.169 "name": "pt1",
00:09:17.169 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:17.169 "is_configured": true,
00:09:17.169 "data_offset": 2048,
00:09:17.169 "data_size": 63488
00:09:17.169 },
00:09:17.169 {
00:09:17.169 "name": null,
00:09:17.169 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:17.169 "is_configured": false,
00:09:17.169 "data_offset": 0,
00:09:17.169 "data_size": 63488
00:09:17.169 },
00:09:17.169 {
00:09:17.169 "name": null,
00:09:17.169 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:17.169 "is_configured": false,
00:09:17.169 "data_offset": 2048,
00:09:17.169 "data_size": 63488
00:09:17.169 }
00:09:17.169 ]
00:09:17.169 }'
00:09:17.169 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:17.169 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:17.737 [2024-09-30 23:26:57.423058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:17.737 [2024-09-30 23:26:57.423175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:17.737 [2024-09-30 23:26:57.423211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:09:17.737 [2024-09-30 23:26:57.423238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:17.737 [2024-09-30 23:26:57.423634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:17.737 [2024-09-30 23:26:57.423696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:17.737 [2024-09-30 23:26:57.423791] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:17.737 [2024-09-30 23:26:57.423846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:17.737 pt2
00:09:17.737 23:26:57
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.737 [2024-09-30 23:26:57.435034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:17.737 [2024-09-30 23:26:57.435119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.737 [2024-09-30 23:26:57.435151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:17.737 [2024-09-30 23:26:57.435176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.737 [2024-09-30 23:26:57.435525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.737 [2024-09-30 23:26:57.435576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:17.737 [2024-09-30 23:26:57.435658] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:17.737 [2024-09-30 23:26:57.435701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:17.737 [2024-09-30 23:26:57.435815] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:17.737 [2024-09-30 23:26:57.435851] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:17.737 [2024-09-30 23:26:57.436085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:17.737 [2024-09-30 23:26:57.436205] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:09:17.737 [2024-09-30 23:26:57.436217] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:17.737 [2024-09-30 23:26:57.436315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.737 pt3 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.737 23:26:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.737 "name": "raid_bdev1", 00:09:17.737 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4", 00:09:17.737 "strip_size_kb": 0, 00:09:17.737 "state": "online", 00:09:17.737 "raid_level": "raid1", 00:09:17.737 "superblock": true, 00:09:17.737 "num_base_bdevs": 3, 00:09:17.737 "num_base_bdevs_discovered": 3, 00:09:17.737 "num_base_bdevs_operational": 3, 00:09:17.737 "base_bdevs_list": [ 00:09:17.737 { 00:09:17.737 "name": "pt1", 00:09:17.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:17.737 "is_configured": true, 00:09:17.737 "data_offset": 2048, 00:09:17.737 "data_size": 63488 00:09:17.737 }, 00:09:17.737 { 00:09:17.737 "name": "pt2", 00:09:17.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:17.737 "is_configured": true, 00:09:17.737 "data_offset": 2048, 00:09:17.737 "data_size": 63488 00:09:17.737 }, 00:09:17.737 { 00:09:17.737 "name": "pt3", 00:09:17.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:17.737 "is_configured": true, 00:09:17.737 "data_offset": 2048, 00:09:17.737 "data_size": 63488 00:09:17.737 } 00:09:17.737 ] 00:09:17.737 }' 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.737 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.306 [2024-09-30 23:26:57.878802] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.306 "name": "raid_bdev1", 00:09:18.306 "aliases": [ 00:09:18.306 "b4e56ef7-358e-4fc6-935e-e450e5ddaae4" 00:09:18.306 ], 00:09:18.306 "product_name": "Raid Volume", 00:09:18.306 "block_size": 512, 00:09:18.306 "num_blocks": 63488, 00:09:18.306 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4", 00:09:18.306 "assigned_rate_limits": { 00:09:18.306 "rw_ios_per_sec": 0, 00:09:18.306 "rw_mbytes_per_sec": 0, 00:09:18.306 "r_mbytes_per_sec": 0, 00:09:18.306 "w_mbytes_per_sec": 0 00:09:18.306 }, 00:09:18.306 "claimed": false, 00:09:18.306 "zoned": false, 00:09:18.306 "supported_io_types": { 00:09:18.306 "read": true, 00:09:18.306 "write": true, 00:09:18.306 "unmap": false, 00:09:18.306 "flush": false, 00:09:18.306 "reset": true, 00:09:18.306 "nvme_admin": false, 00:09:18.306 "nvme_io": false, 00:09:18.306 "nvme_io_md": false, 00:09:18.306 "write_zeroes": true, 00:09:18.306 "zcopy": false, 00:09:18.306 "get_zone_info": 
false, 00:09:18.306 "zone_management": false, 00:09:18.306 "zone_append": false, 00:09:18.306 "compare": false, 00:09:18.306 "compare_and_write": false, 00:09:18.306 "abort": false, 00:09:18.306 "seek_hole": false, 00:09:18.306 "seek_data": false, 00:09:18.306 "copy": false, 00:09:18.306 "nvme_iov_md": false 00:09:18.306 }, 00:09:18.306 "memory_domains": [ 00:09:18.306 { 00:09:18.306 "dma_device_id": "system", 00:09:18.306 "dma_device_type": 1 00:09:18.306 }, 00:09:18.306 { 00:09:18.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.306 "dma_device_type": 2 00:09:18.306 }, 00:09:18.306 { 00:09:18.306 "dma_device_id": "system", 00:09:18.306 "dma_device_type": 1 00:09:18.306 }, 00:09:18.306 { 00:09:18.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.306 "dma_device_type": 2 00:09:18.306 }, 00:09:18.306 { 00:09:18.306 "dma_device_id": "system", 00:09:18.306 "dma_device_type": 1 00:09:18.306 }, 00:09:18.306 { 00:09:18.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.306 "dma_device_type": 2 00:09:18.306 } 00:09:18.306 ], 00:09:18.306 "driver_specific": { 00:09:18.306 "raid": { 00:09:18.306 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4", 00:09:18.306 "strip_size_kb": 0, 00:09:18.306 "state": "online", 00:09:18.306 "raid_level": "raid1", 00:09:18.306 "superblock": true, 00:09:18.306 "num_base_bdevs": 3, 00:09:18.306 "num_base_bdevs_discovered": 3, 00:09:18.306 "num_base_bdevs_operational": 3, 00:09:18.306 "base_bdevs_list": [ 00:09:18.306 { 00:09:18.306 "name": "pt1", 00:09:18.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.306 "is_configured": true, 00:09:18.306 "data_offset": 2048, 00:09:18.306 "data_size": 63488 00:09:18.306 }, 00:09:18.306 { 00:09:18.306 "name": "pt2", 00:09:18.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.306 "is_configured": true, 00:09:18.306 "data_offset": 2048, 00:09:18.306 "data_size": 63488 00:09:18.306 }, 00:09:18.306 { 00:09:18.306 "name": "pt3", 00:09:18.306 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:18.306 "is_configured": true, 00:09:18.306 "data_offset": 2048, 00:09:18.306 "data_size": 63488 00:09:18.306 } 00:09:18.306 ] 00:09:18.306 } 00:09:18.306 } 00:09:18.306 }' 00:09:18.306 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.307 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.307 pt2 00:09:18.307 pt3' 00:09:18.307 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.307 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.307 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.307 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.307 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.307 23:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.307 23:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.307 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.307 [2024-09-30 23:26:58.142331] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b4e56ef7-358e-4fc6-935e-e450e5ddaae4 '!=' b4e56ef7-358e-4fc6-935e-e450e5ddaae4 ']' 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.566 [2024-09-30 23:26:58.186018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.566 23:26:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.566 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.566 "name": "raid_bdev1", 00:09:18.566 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4", 00:09:18.566 "strip_size_kb": 0, 00:09:18.566 "state": "online", 00:09:18.566 "raid_level": "raid1", 00:09:18.566 "superblock": true, 00:09:18.566 "num_base_bdevs": 3, 00:09:18.566 "num_base_bdevs_discovered": 2, 00:09:18.566 "num_base_bdevs_operational": 2, 00:09:18.567 "base_bdevs_list": [ 00:09:18.567 { 00:09:18.567 "name": null, 00:09:18.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.567 "is_configured": false, 00:09:18.567 "data_offset": 0, 00:09:18.567 "data_size": 63488 00:09:18.567 }, 00:09:18.567 { 00:09:18.567 "name": "pt2", 00:09:18.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.567 "is_configured": true, 00:09:18.567 "data_offset": 2048, 00:09:18.567 "data_size": 63488 00:09:18.567 }, 00:09:18.567 { 00:09:18.567 "name": "pt3", 00:09:18.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:18.567 "is_configured": true, 00:09:18.567 "data_offset": 2048, 00:09:18.567 "data_size": 63488 00:09:18.567 } 
00:09:18.567 ] 00:09:18.567 }' 00:09:18.567 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.567 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.825 [2024-09-30 23:26:58.657199] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:18.825 [2024-09-30 23:26:58.657233] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.825 [2024-09-30 23:26:58.657300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.825 [2024-09-30 23:26:58.657357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.825 [2024-09-30 23:26:58.657366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.825 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.084 23:26:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.084 [2024-09-30 23:26:58.741031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.084 [2024-09-30 23:26:58.741084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.084 [2024-09-30 23:26:58.741103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:19.084 [2024-09-30 23:26:58.741112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.084 [2024-09-30 23:26:58.743221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.084 [2024-09-30 23:26:58.743305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.084 [2024-09-30 23:26:58.743379] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.084 [2024-09-30 23:26:58.743411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.084 pt2 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.084 23:26:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.084 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.084 "name": "raid_bdev1", 00:09:19.084 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4", 00:09:19.084 "strip_size_kb": 0, 00:09:19.084 "state": "configuring", 00:09:19.084 "raid_level": "raid1", 00:09:19.084 "superblock": true, 00:09:19.084 "num_base_bdevs": 3, 00:09:19.084 "num_base_bdevs_discovered": 1, 00:09:19.084 "num_base_bdevs_operational": 2, 00:09:19.084 "base_bdevs_list": [ 00:09:19.084 { 00:09:19.084 "name": null, 00:09:19.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.084 "is_configured": false, 00:09:19.084 "data_offset": 2048, 00:09:19.084 "data_size": 63488 00:09:19.084 }, 00:09:19.085 { 00:09:19.085 "name": "pt2", 00:09:19.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.085 "is_configured": true, 00:09:19.085 "data_offset": 2048, 00:09:19.085 "data_size": 63488 00:09:19.085 }, 00:09:19.085 { 00:09:19.085 "name": null, 00:09:19.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.085 "is_configured": false, 00:09:19.085 "data_offset": 2048, 00:09:19.085 "data_size": 63488 00:09:19.085 } 
00:09:19.085 ] 00:09:19.085 }' 00:09:19.085 23:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.085 23:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.651 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:19.651 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:19.651 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:19.651 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.651 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.651 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.652 [2024-09-30 23:26:59.252232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.652 [2024-09-30 23:26:59.252354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.652 [2024-09-30 23:26:59.252393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:19.652 [2024-09-30 23:26:59.252421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.652 [2024-09-30 23:26:59.252849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.652 [2024-09-30 23:26:59.252918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.652 [2024-09-30 23:26:59.253024] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:19.652 [2024-09-30 23:26:59.253077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.652 [2024-09-30 23:26:59.253197] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:09:19.652 [2024-09-30 23:26:59.253234] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:19.652 [2024-09-30 23:26:59.253507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.652 [2024-09-30 23:26:59.253664] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:19.652 [2024-09-30 23:26:59.253705] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:19.652 [2024-09-30 23:26:59.253849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.652 pt3 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.652 "name": "raid_bdev1", 00:09:19.652 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4", 00:09:19.652 "strip_size_kb": 0, 00:09:19.652 "state": "online", 00:09:19.652 "raid_level": "raid1", 00:09:19.652 "superblock": true, 00:09:19.652 "num_base_bdevs": 3, 00:09:19.652 "num_base_bdevs_discovered": 2, 00:09:19.652 "num_base_bdevs_operational": 2, 00:09:19.652 "base_bdevs_list": [ 00:09:19.652 { 00:09:19.652 "name": null, 00:09:19.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.652 "is_configured": false, 00:09:19.652 "data_offset": 2048, 00:09:19.652 "data_size": 63488 00:09:19.652 }, 00:09:19.652 { 00:09:19.652 "name": "pt2", 00:09:19.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.652 "is_configured": true, 00:09:19.652 "data_offset": 2048, 00:09:19.652 "data_size": 63488 00:09:19.652 }, 00:09:19.652 { 00:09:19.652 "name": "pt3", 00:09:19.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.652 "is_configured": true, 00:09:19.652 "data_offset": 2048, 00:09:19.652 "data_size": 63488 00:09:19.652 } 00:09:19.652 ] 00:09:19.652 }' 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.652 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.911 [2024-09-30 23:26:59.707491] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.911 [2024-09-30 23:26:59.707577] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.911 [2024-09-30 23:26:59.707650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.911 [2024-09-30 23:26:59.707707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.911 [2024-09-30 23:26:59.707718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:19.911 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.170 [2024-09-30 23:26:59.783331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.170 [2024-09-30 23:26:59.783390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.170 [2024-09-30 23:26:59.783405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:20.170 [2024-09-30 23:26:59.783415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.170 [2024-09-30 23:26:59.785487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.170 [2024-09-30 23:26:59.785527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.170 [2024-09-30 23:26:59.785591] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:20.170 [2024-09-30 23:26:59.785641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:20.170 [2024-09-30 23:26:59.785734] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:20.170 [2024-09-30 23:26:59.785749] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.170 [2024-09-30 23:26:59.785767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:09:20.170 [2024-09-30 23:26:59.785803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.170 pt1 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.170 "name": "raid_bdev1", 00:09:20.170 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4", 00:09:20.170 "strip_size_kb": 0, 00:09:20.170 "state": "configuring", 00:09:20.170 "raid_level": "raid1", 00:09:20.170 "superblock": true, 00:09:20.170 "num_base_bdevs": 3, 00:09:20.170 "num_base_bdevs_discovered": 1, 00:09:20.170 "num_base_bdevs_operational": 2, 00:09:20.170 "base_bdevs_list": [ 00:09:20.170 { 00:09:20.170 "name": null, 00:09:20.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.170 "is_configured": false, 00:09:20.170 "data_offset": 2048, 00:09:20.170 "data_size": 63488 00:09:20.170 }, 00:09:20.170 { 00:09:20.170 "name": "pt2", 00:09:20.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.170 "is_configured": true, 00:09:20.170 "data_offset": 2048, 00:09:20.170 "data_size": 63488 00:09:20.170 }, 00:09:20.170 { 00:09:20.170 "name": null, 00:09:20.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.170 "is_configured": false, 00:09:20.170 "data_offset": 2048, 00:09:20.170 "data_size": 63488 00:09:20.170 } 00:09:20.170 ] 00:09:20.170 }' 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.170 23:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.430 [2024-09-30 23:27:00.262584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:20.430 [2024-09-30 23:27:00.262692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.430 [2024-09-30 23:27:00.262726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:20.430 [2024-09-30 23:27:00.262754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.430 [2024-09-30 23:27:00.263165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.430 [2024-09-30 23:27:00.263231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:20.430 [2024-09-30 23:27:00.263327] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:20.430 [2024-09-30 23:27:00.263397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:20.430 [2024-09-30 23:27:00.263522] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:20.430 [2024-09-30 23:27:00.263561] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:20.430 [2024-09-30 23:27:00.263792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:20.430 [2024-09-30 23:27:00.263961] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:20.430 [2024-09-30 23:27:00.264003] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:20.430 [2024-09-30 23:27:00.264139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.430 pt3 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.430 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.689 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:20.689 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.689 "name": "raid_bdev1", 00:09:20.689 "uuid": "b4e56ef7-358e-4fc6-935e-e450e5ddaae4", 00:09:20.689 "strip_size_kb": 0, 00:09:20.689 "state": "online", 00:09:20.689 "raid_level": "raid1", 00:09:20.689 "superblock": true, 00:09:20.689 "num_base_bdevs": 3, 00:09:20.689 "num_base_bdevs_discovered": 2, 00:09:20.689 "num_base_bdevs_operational": 2, 00:09:20.689 "base_bdevs_list": [ 00:09:20.689 { 00:09:20.689 "name": null, 00:09:20.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.689 "is_configured": false, 00:09:20.689 "data_offset": 2048, 00:09:20.689 "data_size": 63488 00:09:20.689 }, 00:09:20.689 { 00:09:20.689 "name": "pt2", 00:09:20.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.689 "is_configured": true, 00:09:20.689 "data_offset": 2048, 00:09:20.689 "data_size": 63488 00:09:20.689 }, 00:09:20.689 { 00:09:20.689 "name": "pt3", 00:09:20.689 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.689 "is_configured": true, 00:09:20.689 "data_offset": 2048, 00:09:20.689 "data_size": 63488 00:09:20.689 } 00:09:20.689 ] 00:09:20.689 }' 00:09:20.689 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.689 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.949 [2024-09-30 23:27:00.694175] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b4e56ef7-358e-4fc6-935e-e450e5ddaae4 '!=' b4e56ef7-358e-4fc6-935e-e450e5ddaae4 ']' 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79705 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79705 ']' 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79705 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79705 00:09:20.949 killing process with pid 79705 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79705' 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79705 00:09:20.949 [2024-09-30 23:27:00.772794] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.949 [2024-09-30 23:27:00.772900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.949 [2024-09-30 23:27:00.772964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.949 [2024-09-30 23:27:00.772974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:20.949 23:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79705 00:09:21.208 [2024-09-30 23:27:00.806131] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.208 23:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:21.208 00:09:21.208 real 0m6.565s 00:09:21.208 user 0m10.936s 00:09:21.208 sys 0m1.400s 00:09:21.208 23:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.208 23:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.208 ************************************ 00:09:21.208 END TEST raid_superblock_test 00:09:21.208 ************************************ 00:09:21.467 23:27:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:21.467 23:27:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:21.467 23:27:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.467 23:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.467 ************************************ 00:09:21.467 START TEST raid_read_error_test 00:09:21.467 ************************************ 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:21.467 23:27:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.467 23:27:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dsb1LbN3fp 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80144 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80144 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80144 ']' 00:09:21.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.467 23:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.467 [2024-09-30 23:27:01.235580] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:21.467 [2024-09-30 23:27:01.235721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80144 ] 00:09:21.726 [2024-09-30 23:27:01.399063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.726 [2024-09-30 23:27:01.447603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.726 [2024-09-30 23:27:01.491077] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.726 [2024-09-30 23:27:01.491117] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.292 BaseBdev1_malloc 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.292 true 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.292 [2024-09-30 23:27:02.101622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.292 [2024-09-30 23:27:02.101733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.292 [2024-09-30 23:27:02.101770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.292 [2024-09-30 23:27:02.101798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.292 [2024-09-30 23:27:02.103893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.292 [2024-09-30 23:27:02.103979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.292 BaseBdev1 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.292 BaseBdev2_malloc 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.292 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.551 true 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.551 [2024-09-30 23:27:02.159505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.551 [2024-09-30 23:27:02.159570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.551 [2024-09-30 23:27:02.159596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.551 [2024-09-30 23:27:02.159608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.551 [2024-09-30 23:27:02.162439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.551 [2024-09-30 23:27:02.162487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.551 BaseBdev2 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.551 BaseBdev3_malloc 00:09:22.551 23:27:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.551 true 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.551 [2024-09-30 23:27:02.200046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:22.551 [2024-09-30 23:27:02.200136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.551 [2024-09-30 23:27:02.200173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:22.551 [2024-09-30 23:27:02.200200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.551 [2024-09-30 23:27:02.202180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.551 [2024-09-30 23:27:02.202252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:22.551 BaseBdev3 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.551 [2024-09-30 23:27:02.212086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.551 [2024-09-30 23:27:02.213903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.551 [2024-09-30 23:27:02.214033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.551 [2024-09-30 23:27:02.214225] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:22.551 [2024-09-30 23:27:02.214282] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:22.551 [2024-09-30 23:27:02.214523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:22.551 [2024-09-30 23:27:02.214702] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:22.551 [2024-09-30 23:27:02.214745] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:22.551 [2024-09-30 23:27:02.214938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.551 23:27:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.551 "name": "raid_bdev1", 00:09:22.551 "uuid": "c3319334-7fc5-4fea-b475-1df4ae32ad53", 00:09:22.551 "strip_size_kb": 0, 00:09:22.551 "state": "online", 00:09:22.551 "raid_level": "raid1", 00:09:22.551 "superblock": true, 00:09:22.551 "num_base_bdevs": 3, 00:09:22.551 "num_base_bdevs_discovered": 3, 00:09:22.551 "num_base_bdevs_operational": 3, 00:09:22.551 "base_bdevs_list": [ 00:09:22.551 { 00:09:22.551 "name": "BaseBdev1", 00:09:22.551 "uuid": "7c545ddc-7e93-50c0-9604-3ffe8f7343de", 00:09:22.551 "is_configured": true, 00:09:22.551 "data_offset": 2048, 00:09:22.551 "data_size": 63488 00:09:22.551 }, 00:09:22.551 { 00:09:22.551 "name": "BaseBdev2", 00:09:22.551 "uuid": "512c4103-87d4-5965-b8e2-4da57e2847fb", 00:09:22.551 "is_configured": true, 00:09:22.551 "data_offset": 2048, 00:09:22.551 "data_size": 63488 
00:09:22.551 }, 00:09:22.551 { 00:09:22.551 "name": "BaseBdev3", 00:09:22.551 "uuid": "fcf2479d-ac5e-52a3-9efd-f89471a80a1e", 00:09:22.551 "is_configured": true, 00:09:22.551 "data_offset": 2048, 00:09:22.551 "data_size": 63488 00:09:22.551 } 00:09:22.551 ] 00:09:22.551 }' 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.551 23:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.119 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:23.119 23:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.119 [2024-09-30 23:27:02.759511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:24.056 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.057 
23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.057 "name": "raid_bdev1", 00:09:24.057 "uuid": "c3319334-7fc5-4fea-b475-1df4ae32ad53", 00:09:24.057 "strip_size_kb": 0, 00:09:24.057 "state": "online", 00:09:24.057 "raid_level": "raid1", 00:09:24.057 "superblock": true, 00:09:24.057 "num_base_bdevs": 3, 00:09:24.057 "num_base_bdevs_discovered": 3, 00:09:24.057 "num_base_bdevs_operational": 3, 00:09:24.057 "base_bdevs_list": [ 00:09:24.057 { 00:09:24.057 "name": "BaseBdev1", 00:09:24.057 "uuid": "7c545ddc-7e93-50c0-9604-3ffe8f7343de", 
00:09:24.057 "is_configured": true, 00:09:24.057 "data_offset": 2048, 00:09:24.057 "data_size": 63488 00:09:24.057 }, 00:09:24.057 { 00:09:24.057 "name": "BaseBdev2", 00:09:24.057 "uuid": "512c4103-87d4-5965-b8e2-4da57e2847fb", 00:09:24.057 "is_configured": true, 00:09:24.057 "data_offset": 2048, 00:09:24.057 "data_size": 63488 00:09:24.057 }, 00:09:24.057 { 00:09:24.057 "name": "BaseBdev3", 00:09:24.057 "uuid": "fcf2479d-ac5e-52a3-9efd-f89471a80a1e", 00:09:24.057 "is_configured": true, 00:09:24.057 "data_offset": 2048, 00:09:24.057 "data_size": 63488 00:09:24.057 } 00:09:24.057 ] 00:09:24.057 }' 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.057 23:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.350 [2024-09-30 23:27:04.158191] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.350 [2024-09-30 23:27:04.158229] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.350 [2024-09-30 23:27:04.160870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.350 [2024-09-30 23:27:04.160943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.350 [2024-09-30 23:27:04.161045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.350 [2024-09-30 23:27:04.161057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:24.350 { 00:09:24.350 "results": [ 00:09:24.350 { 00:09:24.350 "job": "raid_bdev1", 
00:09:24.350 "core_mask": "0x1", 00:09:24.350 "workload": "randrw", 00:09:24.350 "percentage": 50, 00:09:24.350 "status": "finished", 00:09:24.350 "queue_depth": 1, 00:09:24.350 "io_size": 131072, 00:09:24.350 "runtime": 1.399576, 00:09:24.350 "iops": 14799.482128873316, 00:09:24.350 "mibps": 1849.9352661091646, 00:09:24.350 "io_failed": 0, 00:09:24.350 "io_timeout": 0, 00:09:24.350 "avg_latency_us": 65.09615626496196, 00:09:24.350 "min_latency_us": 21.799126637554586, 00:09:24.350 "max_latency_us": 1624.0908296943232 00:09:24.350 } 00:09:24.350 ], 00:09:24.350 "core_count": 1 00:09:24.350 } 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80144 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80144 ']' 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80144 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:24.350 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80144 00:09:24.618 killing process with pid 80144 00:09:24.618 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:24.618 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:24.618 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80144' 00:09:24.618 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80144 00:09:24.618 [2024-09-30 23:27:04.211465] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.618 23:27:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80144 00:09:24.618 [2024-09-30 23:27:04.237477] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dsb1LbN3fp 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:24.882 00:09:24.882 real 0m3.355s 00:09:24.882 user 0m4.223s 00:09:24.882 sys 0m0.581s 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.882 23:27:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.882 ************************************ 00:09:24.882 END TEST raid_read_error_test 00:09:24.882 ************************************ 00:09:24.882 23:27:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:24.882 23:27:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:24.882 23:27:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.882 23:27:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.882 ************************************ 00:09:24.882 START TEST raid_write_error_test 00:09:24.882 ************************************ 00:09:24.882 23:27:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:24.882 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:24.882 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:24.882 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:24.882 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UFCHGVh9qM 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80274 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80274 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80274 ']' 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.883 23:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.883 [2024-09-30 23:27:04.653978] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:09:24.883 [2024-09-30 23:27:04.654117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80274 ] 00:09:25.142 [2024-09-30 23:27:04.796582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.142 [2024-09-30 23:27:04.842472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.142 [2024-09-30 23:27:04.884584] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.142 [2024-09-30 23:27:04.884628] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.711 BaseBdev1_malloc 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.711 true 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.711 [2024-09-30 23:27:05.515505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:25.711 [2024-09-30 23:27:05.515566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.711 [2024-09-30 23:27:05.515586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:25.711 [2024-09-30 23:27:05.515596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.711 [2024-09-30 23:27:05.517729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.711 [2024-09-30 23:27:05.517767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:25.711 BaseBdev1 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.711 BaseBdev2_malloc 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.711 true 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.711 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:25.712 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.712 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.972 [2024-09-30 23:27:05.565181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:25.972 [2024-09-30 23:27:05.565233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.972 [2024-09-30 23:27:05.565252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:25.972 [2024-09-30 23:27:05.565260] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.972 [2024-09-30 23:27:05.567267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.972 [2024-09-30 23:27:05.567305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:25.972 BaseBdev2 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.972 23:27:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.972 BaseBdev3_malloc 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.972 true 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.972 [2024-09-30 23:27:05.605598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:25.972 [2024-09-30 23:27:05.605650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.972 [2024-09-30 23:27:05.605670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:25.972 [2024-09-30 23:27:05.605680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.972 [2024-09-30 23:27:05.607754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.972 [2024-09-30 23:27:05.607794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:25.972 BaseBdev3 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.972 [2024-09-30 23:27:05.617633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.972 [2024-09-30 23:27:05.619486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.972 [2024-09-30 23:27:05.619579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.972 [2024-09-30 23:27:05.619765] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:25.972 [2024-09-30 23:27:05.619787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:25.972 [2024-09-30 23:27:05.620053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:25.972 [2024-09-30 23:27:05.620224] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:25.972 [2024-09-30 23:27:05.620241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:25.972 [2024-09-30 23:27:05.620381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.972 "name": "raid_bdev1", 00:09:25.972 "uuid": "c8af2623-373b-4fde-843b-d329758c042b", 00:09:25.972 "strip_size_kb": 0, 00:09:25.972 "state": "online", 00:09:25.972 "raid_level": "raid1", 00:09:25.972 "superblock": true, 00:09:25.972 "num_base_bdevs": 3, 00:09:25.972 "num_base_bdevs_discovered": 3, 00:09:25.972 "num_base_bdevs_operational": 3, 00:09:25.972 "base_bdevs_list": [ 00:09:25.972 { 00:09:25.972 "name": "BaseBdev1", 00:09:25.972 
"uuid": "ef694646-8c01-5b47-a86e-f06ceb9d3ac4", 00:09:25.972 "is_configured": true, 00:09:25.972 "data_offset": 2048, 00:09:25.972 "data_size": 63488 00:09:25.972 }, 00:09:25.972 { 00:09:25.972 "name": "BaseBdev2", 00:09:25.972 "uuid": "ee121d5e-ce94-5be6-84c0-33d039f84327", 00:09:25.972 "is_configured": true, 00:09:25.972 "data_offset": 2048, 00:09:25.972 "data_size": 63488 00:09:25.972 }, 00:09:25.972 { 00:09:25.972 "name": "BaseBdev3", 00:09:25.972 "uuid": "03d873e7-4091-5e5a-8ea5-fe07796b8cc7", 00:09:25.972 "is_configured": true, 00:09:25.972 "data_offset": 2048, 00:09:25.972 "data_size": 63488 00:09:25.972 } 00:09:25.972 ] 00:09:25.972 }' 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.972 23:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.232 23:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:26.232 23:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:26.491 [2024-09-30 23:27:06.157036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.466 [2024-09-30 23:27:07.076000] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:27.466 [2024-09-30 23:27:07.076064] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.466 [2024-09-30 23:27:07.076284] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.466 "name": "raid_bdev1", 00:09:27.466 "uuid": "c8af2623-373b-4fde-843b-d329758c042b", 00:09:27.466 "strip_size_kb": 0, 00:09:27.466 "state": "online", 00:09:27.466 "raid_level": "raid1", 00:09:27.466 "superblock": true, 00:09:27.466 "num_base_bdevs": 3, 00:09:27.466 "num_base_bdevs_discovered": 2, 00:09:27.466 "num_base_bdevs_operational": 2, 00:09:27.466 "base_bdevs_list": [ 00:09:27.466 { 00:09:27.466 "name": null, 00:09:27.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.466 "is_configured": false, 00:09:27.466 "data_offset": 0, 00:09:27.466 "data_size": 63488 00:09:27.466 }, 00:09:27.466 { 00:09:27.466 "name": "BaseBdev2", 00:09:27.466 "uuid": "ee121d5e-ce94-5be6-84c0-33d039f84327", 00:09:27.466 "is_configured": true, 00:09:27.466 "data_offset": 2048, 00:09:27.466 "data_size": 63488 00:09:27.466 }, 00:09:27.466 { 00:09:27.466 "name": "BaseBdev3", 00:09:27.466 "uuid": "03d873e7-4091-5e5a-8ea5-fe07796b8cc7", 00:09:27.466 "is_configured": true, 00:09:27.466 "data_offset": 2048, 00:09:27.466 "data_size": 63488 00:09:27.466 } 00:09:27.466 ] 00:09:27.466 }' 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.466 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.724 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.724 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.724 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.724 [2024-09-30 23:27:07.521994] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.724 [2024-09-30 23:27:07.522033] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.724 [2024-09-30 23:27:07.524558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.724 [2024-09-30 23:27:07.524613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.724 [2024-09-30 23:27:07.524695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.724 [2024-09-30 23:27:07.524705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:27.724 { 00:09:27.724 "results": [ 00:09:27.724 { 00:09:27.724 "job": "raid_bdev1", 00:09:27.724 "core_mask": "0x1", 00:09:27.724 "workload": "randrw", 00:09:27.724 "percentage": 50, 00:09:27.724 "status": "finished", 00:09:27.724 "queue_depth": 1, 00:09:27.724 "io_size": 131072, 00:09:27.724 "runtime": 1.365796, 00:09:27.724 "iops": 16403.62103857384, 00:09:27.724 "mibps": 2050.45262982173, 00:09:27.724 "io_failed": 0, 00:09:27.724 "io_timeout": 0, 00:09:27.724 "avg_latency_us": 58.46128849417875, 00:09:27.724 "min_latency_us": 21.799126637554586, 00:09:27.724 "max_latency_us": 1366.5257641921398 00:09:27.724 } 00:09:27.724 ], 00:09:27.724 "core_count": 1 00:09:27.724 } 00:09:27.724 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80274 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80274 ']' 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80274 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:27.725 23:27:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80274 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.725 killing process with pid 80274 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80274' 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80274 00:09:27.725 [2024-09-30 23:27:07.572166] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.725 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80274 00:09:27.982 [2024-09-30 23:27:07.598294] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.982 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UFCHGVh9qM 00:09:27.982 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:27.982 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.241 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:28.241 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:28.241 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.241 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:28.241 23:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:28.241 00:09:28.241 real 0m3.286s 00:09:28.241 user 0m4.163s 00:09:28.241 sys 0m0.531s 00:09:28.241 23:27:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.241 23:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.241 ************************************ 00:09:28.241 END TEST raid_write_error_test 00:09:28.241 ************************************ 00:09:28.241 23:27:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:28.241 23:27:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:28.241 23:27:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:28.241 23:27:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:28.241 23:27:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.241 23:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.241 ************************************ 00:09:28.241 START TEST raid_state_function_test 00:09:28.241 ************************************ 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:28.241 
23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80406 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80406' 00:09:28.241 Process raid pid: 80406 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80406 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80406 ']' 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.241 23:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.241 [2024-09-30 23:27:08.009134] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:28.241 [2024-09-30 23:27:08.009619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.499 [2024-09-30 23:27:08.172210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.499 [2024-09-30 23:27:08.216810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.499 [2024-09-30 23:27:08.259134] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.499 [2024-09-30 23:27:08.259173] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.064 23:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.064 23:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:29.064 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:29.064 23:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.064 23:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.065 [2024-09-30 23:27:08.840514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.065 [2024-09-30 23:27:08.840563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.065 [2024-09-30 23:27:08.840575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.065 [2024-09-30 23:27:08.840584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.065 [2024-09-30 23:27:08.840590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:29.065 [2024-09-30 23:27:08.840603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.065 [2024-09-30 23:27:08.840609] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:29.065 [2024-09-30 23:27:08.840617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.065 "name": "Existed_Raid", 00:09:29.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.065 "strip_size_kb": 64, 00:09:29.065 "state": "configuring", 00:09:29.065 "raid_level": "raid0", 00:09:29.065 "superblock": false, 00:09:29.065 "num_base_bdevs": 4, 00:09:29.065 "num_base_bdevs_discovered": 0, 00:09:29.065 "num_base_bdevs_operational": 4, 00:09:29.065 "base_bdevs_list": [ 00:09:29.065 { 00:09:29.065 "name": "BaseBdev1", 00:09:29.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.065 "is_configured": false, 00:09:29.065 "data_offset": 0, 00:09:29.065 "data_size": 0 00:09:29.065 }, 00:09:29.065 { 00:09:29.065 "name": "BaseBdev2", 00:09:29.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.065 "is_configured": false, 00:09:29.065 "data_offset": 0, 00:09:29.065 "data_size": 0 00:09:29.065 }, 00:09:29.065 { 00:09:29.065 "name": "BaseBdev3", 00:09:29.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.065 "is_configured": false, 00:09:29.065 "data_offset": 0, 00:09:29.065 "data_size": 0 00:09:29.065 }, 00:09:29.065 { 00:09:29.065 "name": "BaseBdev4", 00:09:29.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.065 "is_configured": false, 00:09:29.065 "data_offset": 0, 00:09:29.065 "data_size": 0 00:09:29.065 } 00:09:29.065 ] 00:09:29.065 }' 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.065 23:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.630 [2024-09-30 23:27:09.279662] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.630 [2024-09-30 23:27:09.279708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.630 [2024-09-30 23:27:09.291676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.630 [2024-09-30 23:27:09.291718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.630 [2024-09-30 23:27:09.291726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.630 [2024-09-30 23:27:09.291735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.630 [2024-09-30 23:27:09.291741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.630 [2024-09-30 23:27:09.291750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.630 [2024-09-30 23:27:09.291756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:29.630 [2024-09-30 23:27:09.291764] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.630 [2024-09-30 23:27:09.312549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.630 BaseBdev1 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.630 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.631 [ 00:09:29.631 { 00:09:29.631 "name": "BaseBdev1", 00:09:29.631 "aliases": [ 00:09:29.631 "5407c88c-f052-4bc1-b2f0-5ee8eedd8958" 00:09:29.631 ], 00:09:29.631 "product_name": "Malloc disk", 00:09:29.631 "block_size": 512, 00:09:29.631 "num_blocks": 65536, 00:09:29.631 "uuid": "5407c88c-f052-4bc1-b2f0-5ee8eedd8958", 00:09:29.631 "assigned_rate_limits": { 00:09:29.631 "rw_ios_per_sec": 0, 00:09:29.631 "rw_mbytes_per_sec": 0, 00:09:29.631 "r_mbytes_per_sec": 0, 00:09:29.631 "w_mbytes_per_sec": 0 00:09:29.631 }, 00:09:29.631 "claimed": true, 00:09:29.631 "claim_type": "exclusive_write", 00:09:29.631 "zoned": false, 00:09:29.631 "supported_io_types": { 00:09:29.631 "read": true, 00:09:29.631 "write": true, 00:09:29.631 "unmap": true, 00:09:29.631 "flush": true, 00:09:29.631 "reset": true, 00:09:29.631 "nvme_admin": false, 00:09:29.631 "nvme_io": false, 00:09:29.631 "nvme_io_md": false, 00:09:29.631 "write_zeroes": true, 00:09:29.631 "zcopy": true, 00:09:29.631 "get_zone_info": false, 00:09:29.631 "zone_management": false, 00:09:29.631 "zone_append": false, 00:09:29.631 "compare": false, 00:09:29.631 "compare_and_write": false, 00:09:29.631 "abort": true, 00:09:29.631 "seek_hole": false, 00:09:29.631 "seek_data": false, 00:09:29.631 "copy": true, 00:09:29.631 "nvme_iov_md": false 00:09:29.631 }, 00:09:29.631 "memory_domains": [ 00:09:29.631 { 00:09:29.631 "dma_device_id": "system", 00:09:29.631 "dma_device_type": 1 00:09:29.631 }, 00:09:29.631 { 00:09:29.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.631 "dma_device_type": 2 00:09:29.631 } 00:09:29.631 ], 00:09:29.631 "driver_specific": {} 00:09:29.631 } 00:09:29.631 ] 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.631 "name": "Existed_Raid", 
00:09:29.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.631 "strip_size_kb": 64, 00:09:29.631 "state": "configuring", 00:09:29.631 "raid_level": "raid0", 00:09:29.631 "superblock": false, 00:09:29.631 "num_base_bdevs": 4, 00:09:29.631 "num_base_bdevs_discovered": 1, 00:09:29.631 "num_base_bdevs_operational": 4, 00:09:29.631 "base_bdevs_list": [ 00:09:29.631 { 00:09:29.631 "name": "BaseBdev1", 00:09:29.631 "uuid": "5407c88c-f052-4bc1-b2f0-5ee8eedd8958", 00:09:29.631 "is_configured": true, 00:09:29.631 "data_offset": 0, 00:09:29.631 "data_size": 65536 00:09:29.631 }, 00:09:29.631 { 00:09:29.631 "name": "BaseBdev2", 00:09:29.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.631 "is_configured": false, 00:09:29.631 "data_offset": 0, 00:09:29.631 "data_size": 0 00:09:29.631 }, 00:09:29.631 { 00:09:29.631 "name": "BaseBdev3", 00:09:29.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.631 "is_configured": false, 00:09:29.631 "data_offset": 0, 00:09:29.631 "data_size": 0 00:09:29.631 }, 00:09:29.631 { 00:09:29.631 "name": "BaseBdev4", 00:09:29.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.631 "is_configured": false, 00:09:29.631 "data_offset": 0, 00:09:29.631 "data_size": 0 00:09:29.631 } 00:09:29.631 ] 00:09:29.631 }' 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.631 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.197 [2024-09-30 23:27:09.795776] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.197 [2024-09-30 23:27:09.795828] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.197 [2024-09-30 23:27:09.803799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.197 [2024-09-30 23:27:09.805631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.197 [2024-09-30 23:27:09.805673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.197 [2024-09-30 23:27:09.805682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.197 [2024-09-30 23:27:09.805691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.197 [2024-09-30 23:27:09.805697] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:30.197 [2024-09-30 23:27:09.805705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.197 "name": "Existed_Raid", 00:09:30.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.197 "strip_size_kb": 64, 00:09:30.197 "state": "configuring", 00:09:30.197 "raid_level": "raid0", 00:09:30.197 "superblock": false, 00:09:30.197 "num_base_bdevs": 4, 00:09:30.197 
"num_base_bdevs_discovered": 1, 00:09:30.197 "num_base_bdevs_operational": 4, 00:09:30.197 "base_bdevs_list": [ 00:09:30.197 { 00:09:30.197 "name": "BaseBdev1", 00:09:30.197 "uuid": "5407c88c-f052-4bc1-b2f0-5ee8eedd8958", 00:09:30.197 "is_configured": true, 00:09:30.197 "data_offset": 0, 00:09:30.197 "data_size": 65536 00:09:30.197 }, 00:09:30.197 { 00:09:30.197 "name": "BaseBdev2", 00:09:30.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.197 "is_configured": false, 00:09:30.197 "data_offset": 0, 00:09:30.197 "data_size": 0 00:09:30.197 }, 00:09:30.197 { 00:09:30.197 "name": "BaseBdev3", 00:09:30.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.197 "is_configured": false, 00:09:30.197 "data_offset": 0, 00:09:30.197 "data_size": 0 00:09:30.197 }, 00:09:30.197 { 00:09:30.197 "name": "BaseBdev4", 00:09:30.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.197 "is_configured": false, 00:09:30.197 "data_offset": 0, 00:09:30.197 "data_size": 0 00:09:30.197 } 00:09:30.197 ] 00:09:30.197 }' 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.197 23:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.456 [2024-09-30 23:27:10.294357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.456 BaseBdev2 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:30.456 23:27:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.456 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.715 [ 00:09:30.715 { 00:09:30.715 "name": "BaseBdev2", 00:09:30.715 "aliases": [ 00:09:30.715 "076711be-7ca7-4a72-8f69-a6494ef6dd63" 00:09:30.715 ], 00:09:30.715 "product_name": "Malloc disk", 00:09:30.715 "block_size": 512, 00:09:30.715 "num_blocks": 65536, 00:09:30.715 "uuid": "076711be-7ca7-4a72-8f69-a6494ef6dd63", 00:09:30.715 "assigned_rate_limits": { 00:09:30.715 "rw_ios_per_sec": 0, 00:09:30.715 "rw_mbytes_per_sec": 0, 00:09:30.715 "r_mbytes_per_sec": 0, 00:09:30.715 "w_mbytes_per_sec": 0 00:09:30.715 }, 00:09:30.715 "claimed": true, 00:09:30.715 "claim_type": "exclusive_write", 00:09:30.715 "zoned": false, 00:09:30.715 "supported_io_types": { 
00:09:30.715 "read": true, 00:09:30.715 "write": true, 00:09:30.715 "unmap": true, 00:09:30.715 "flush": true, 00:09:30.715 "reset": true, 00:09:30.715 "nvme_admin": false, 00:09:30.715 "nvme_io": false, 00:09:30.715 "nvme_io_md": false, 00:09:30.715 "write_zeroes": true, 00:09:30.715 "zcopy": true, 00:09:30.715 "get_zone_info": false, 00:09:30.715 "zone_management": false, 00:09:30.715 "zone_append": false, 00:09:30.715 "compare": false, 00:09:30.715 "compare_and_write": false, 00:09:30.715 "abort": true, 00:09:30.715 "seek_hole": false, 00:09:30.715 "seek_data": false, 00:09:30.715 "copy": true, 00:09:30.715 "nvme_iov_md": false 00:09:30.715 }, 00:09:30.715 "memory_domains": [ 00:09:30.715 { 00:09:30.715 "dma_device_id": "system", 00:09:30.715 "dma_device_type": 1 00:09:30.715 }, 00:09:30.715 { 00:09:30.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.715 "dma_device_type": 2 00:09:30.715 } 00:09:30.715 ], 00:09:30.715 "driver_specific": {} 00:09:30.715 } 00:09:30.715 ] 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.715 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.715 "name": "Existed_Raid", 00:09:30.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.715 "strip_size_kb": 64, 00:09:30.715 "state": "configuring", 00:09:30.715 "raid_level": "raid0", 00:09:30.715 "superblock": false, 00:09:30.715 "num_base_bdevs": 4, 00:09:30.715 "num_base_bdevs_discovered": 2, 00:09:30.715 "num_base_bdevs_operational": 4, 00:09:30.715 "base_bdevs_list": [ 00:09:30.715 { 00:09:30.715 "name": "BaseBdev1", 00:09:30.715 "uuid": "5407c88c-f052-4bc1-b2f0-5ee8eedd8958", 00:09:30.715 "is_configured": true, 00:09:30.715 "data_offset": 0, 00:09:30.715 "data_size": 65536 00:09:30.715 }, 00:09:30.715 { 00:09:30.715 "name": "BaseBdev2", 00:09:30.716 "uuid": "076711be-7ca7-4a72-8f69-a6494ef6dd63", 00:09:30.716 
"is_configured": true, 00:09:30.716 "data_offset": 0, 00:09:30.716 "data_size": 65536 00:09:30.716 }, 00:09:30.716 { 00:09:30.716 "name": "BaseBdev3", 00:09:30.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.716 "is_configured": false, 00:09:30.716 "data_offset": 0, 00:09:30.716 "data_size": 0 00:09:30.716 }, 00:09:30.716 { 00:09:30.716 "name": "BaseBdev4", 00:09:30.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.716 "is_configured": false, 00:09:30.716 "data_offset": 0, 00:09:30.716 "data_size": 0 00:09:30.716 } 00:09:30.716 ] 00:09:30.716 }' 00:09:30.716 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.716 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.974 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.974 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.974 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.974 [2024-09-30 23:27:10.712655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.974 BaseBdev3 00:09:30.974 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.974 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:30.974 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:30.974 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.975 [ 00:09:30.975 { 00:09:30.975 "name": "BaseBdev3", 00:09:30.975 "aliases": [ 00:09:30.975 "52cb24ca-6424-4190-a692-d2178ebac9c3" 00:09:30.975 ], 00:09:30.975 "product_name": "Malloc disk", 00:09:30.975 "block_size": 512, 00:09:30.975 "num_blocks": 65536, 00:09:30.975 "uuid": "52cb24ca-6424-4190-a692-d2178ebac9c3", 00:09:30.975 "assigned_rate_limits": { 00:09:30.975 "rw_ios_per_sec": 0, 00:09:30.975 "rw_mbytes_per_sec": 0, 00:09:30.975 "r_mbytes_per_sec": 0, 00:09:30.975 "w_mbytes_per_sec": 0 00:09:30.975 }, 00:09:30.975 "claimed": true, 00:09:30.975 "claim_type": "exclusive_write", 00:09:30.975 "zoned": false, 00:09:30.975 "supported_io_types": { 00:09:30.975 "read": true, 00:09:30.975 "write": true, 00:09:30.975 "unmap": true, 00:09:30.975 "flush": true, 00:09:30.975 "reset": true, 00:09:30.975 "nvme_admin": false, 00:09:30.975 "nvme_io": false, 00:09:30.975 "nvme_io_md": false, 00:09:30.975 "write_zeroes": true, 00:09:30.975 "zcopy": true, 00:09:30.975 "get_zone_info": false, 00:09:30.975 "zone_management": false, 00:09:30.975 "zone_append": false, 00:09:30.975 "compare": false, 00:09:30.975 "compare_and_write": false, 
00:09:30.975 "abort": true, 00:09:30.975 "seek_hole": false, 00:09:30.975 "seek_data": false, 00:09:30.975 "copy": true, 00:09:30.975 "nvme_iov_md": false 00:09:30.975 }, 00:09:30.975 "memory_domains": [ 00:09:30.975 { 00:09:30.975 "dma_device_id": "system", 00:09:30.975 "dma_device_type": 1 00:09:30.975 }, 00:09:30.975 { 00:09:30.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.975 "dma_device_type": 2 00:09:30.975 } 00:09:30.975 ], 00:09:30.975 "driver_specific": {} 00:09:30.975 } 00:09:30.975 ] 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.975 "name": "Existed_Raid", 00:09:30.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.975 "strip_size_kb": 64, 00:09:30.975 "state": "configuring", 00:09:30.975 "raid_level": "raid0", 00:09:30.975 "superblock": false, 00:09:30.975 "num_base_bdevs": 4, 00:09:30.975 "num_base_bdevs_discovered": 3, 00:09:30.975 "num_base_bdevs_operational": 4, 00:09:30.975 "base_bdevs_list": [ 00:09:30.975 { 00:09:30.975 "name": "BaseBdev1", 00:09:30.975 "uuid": "5407c88c-f052-4bc1-b2f0-5ee8eedd8958", 00:09:30.975 "is_configured": true, 00:09:30.975 "data_offset": 0, 00:09:30.975 "data_size": 65536 00:09:30.975 }, 00:09:30.975 { 00:09:30.975 "name": "BaseBdev2", 00:09:30.975 "uuid": "076711be-7ca7-4a72-8f69-a6494ef6dd63", 00:09:30.975 "is_configured": true, 00:09:30.975 "data_offset": 0, 00:09:30.975 "data_size": 65536 00:09:30.975 }, 00:09:30.975 { 00:09:30.975 "name": "BaseBdev3", 00:09:30.975 "uuid": "52cb24ca-6424-4190-a692-d2178ebac9c3", 00:09:30.975 "is_configured": true, 00:09:30.975 "data_offset": 0, 00:09:30.975 "data_size": 65536 00:09:30.975 }, 00:09:30.975 { 00:09:30.975 "name": "BaseBdev4", 00:09:30.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.975 "is_configured": false, 
00:09:30.975 "data_offset": 0, 00:09:30.975 "data_size": 0 00:09:30.975 } 00:09:30.975 ] 00:09:30.975 }' 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.975 23:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.544 [2024-09-30 23:27:11.182826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:31.544 [2024-09-30 23:27:11.182877] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:31.544 [2024-09-30 23:27:11.182887] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:31.544 [2024-09-30 23:27:11.183206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:31.544 [2024-09-30 23:27:11.183351] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:31.544 [2024-09-30 23:27:11.183364] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:31.544 [2024-09-30 23:27:11.183570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.544 BaseBdev4 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.544 [ 00:09:31.544 { 00:09:31.544 "name": "BaseBdev4", 00:09:31.544 "aliases": [ 00:09:31.544 "1776b099-ec70-49b2-92d3-3c1064561e1b" 00:09:31.544 ], 00:09:31.544 "product_name": "Malloc disk", 00:09:31.544 "block_size": 512, 00:09:31.544 "num_blocks": 65536, 00:09:31.544 "uuid": "1776b099-ec70-49b2-92d3-3c1064561e1b", 00:09:31.544 "assigned_rate_limits": { 00:09:31.544 "rw_ios_per_sec": 0, 00:09:31.544 "rw_mbytes_per_sec": 0, 00:09:31.544 "r_mbytes_per_sec": 0, 00:09:31.544 "w_mbytes_per_sec": 0 00:09:31.544 }, 00:09:31.544 "claimed": true, 00:09:31.544 "claim_type": "exclusive_write", 00:09:31.544 "zoned": false, 00:09:31.544 "supported_io_types": { 00:09:31.544 "read": true, 00:09:31.544 "write": true, 00:09:31.544 "unmap": true, 00:09:31.544 "flush": true, 00:09:31.544 "reset": true, 00:09:31.544 
"nvme_admin": false, 00:09:31.544 "nvme_io": false, 00:09:31.544 "nvme_io_md": false, 00:09:31.544 "write_zeroes": true, 00:09:31.544 "zcopy": true, 00:09:31.544 "get_zone_info": false, 00:09:31.544 "zone_management": false, 00:09:31.544 "zone_append": false, 00:09:31.544 "compare": false, 00:09:31.544 "compare_and_write": false, 00:09:31.544 "abort": true, 00:09:31.544 "seek_hole": false, 00:09:31.544 "seek_data": false, 00:09:31.544 "copy": true, 00:09:31.544 "nvme_iov_md": false 00:09:31.544 }, 00:09:31.544 "memory_domains": [ 00:09:31.544 { 00:09:31.544 "dma_device_id": "system", 00:09:31.544 "dma_device_type": 1 00:09:31.544 }, 00:09:31.544 { 00:09:31.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.544 "dma_device_type": 2 00:09:31.544 } 00:09:31.544 ], 00:09:31.544 "driver_specific": {} 00:09:31.544 } 00:09:31.544 ] 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.544 23:27:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.544 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.544 "name": "Existed_Raid", 00:09:31.544 "uuid": "ba76fc49-9f42-47f2-8ced-84d2dd422386", 00:09:31.544 "strip_size_kb": 64, 00:09:31.544 "state": "online", 00:09:31.544 "raid_level": "raid0", 00:09:31.544 "superblock": false, 00:09:31.544 "num_base_bdevs": 4, 00:09:31.544 "num_base_bdevs_discovered": 4, 00:09:31.544 "num_base_bdevs_operational": 4, 00:09:31.544 "base_bdevs_list": [ 00:09:31.544 { 00:09:31.544 "name": "BaseBdev1", 00:09:31.544 "uuid": "5407c88c-f052-4bc1-b2f0-5ee8eedd8958", 00:09:31.544 "is_configured": true, 00:09:31.544 "data_offset": 0, 00:09:31.544 "data_size": 65536 00:09:31.544 }, 00:09:31.544 { 00:09:31.544 "name": "BaseBdev2", 00:09:31.544 "uuid": "076711be-7ca7-4a72-8f69-a6494ef6dd63", 00:09:31.544 "is_configured": true, 00:09:31.544 "data_offset": 0, 00:09:31.544 "data_size": 65536 00:09:31.544 }, 00:09:31.544 { 00:09:31.544 "name": "BaseBdev3", 00:09:31.544 "uuid": 
"52cb24ca-6424-4190-a692-d2178ebac9c3", 00:09:31.544 "is_configured": true, 00:09:31.544 "data_offset": 0, 00:09:31.544 "data_size": 65536 00:09:31.544 }, 00:09:31.544 { 00:09:31.544 "name": "BaseBdev4", 00:09:31.544 "uuid": "1776b099-ec70-49b2-92d3-3c1064561e1b", 00:09:31.544 "is_configured": true, 00:09:31.545 "data_offset": 0, 00:09:31.545 "data_size": 65536 00:09:31.545 } 00:09:31.545 ] 00:09:31.545 }' 00:09:31.545 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.545 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.113 [2024-09-30 23:27:11.698278] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.113 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.113 23:27:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.113 "name": "Existed_Raid", 00:09:32.113 "aliases": [ 00:09:32.113 "ba76fc49-9f42-47f2-8ced-84d2dd422386" 00:09:32.113 ], 00:09:32.113 "product_name": "Raid Volume", 00:09:32.113 "block_size": 512, 00:09:32.113 "num_blocks": 262144, 00:09:32.113 "uuid": "ba76fc49-9f42-47f2-8ced-84d2dd422386", 00:09:32.113 "assigned_rate_limits": { 00:09:32.113 "rw_ios_per_sec": 0, 00:09:32.113 "rw_mbytes_per_sec": 0, 00:09:32.113 "r_mbytes_per_sec": 0, 00:09:32.113 "w_mbytes_per_sec": 0 00:09:32.113 }, 00:09:32.113 "claimed": false, 00:09:32.113 "zoned": false, 00:09:32.113 "supported_io_types": { 00:09:32.113 "read": true, 00:09:32.113 "write": true, 00:09:32.113 "unmap": true, 00:09:32.113 "flush": true, 00:09:32.113 "reset": true, 00:09:32.113 "nvme_admin": false, 00:09:32.113 "nvme_io": false, 00:09:32.113 "nvme_io_md": false, 00:09:32.113 "write_zeroes": true, 00:09:32.113 "zcopy": false, 00:09:32.113 "get_zone_info": false, 00:09:32.113 "zone_management": false, 00:09:32.113 "zone_append": false, 00:09:32.113 "compare": false, 00:09:32.113 "compare_and_write": false, 00:09:32.113 "abort": false, 00:09:32.113 "seek_hole": false, 00:09:32.113 "seek_data": false, 00:09:32.113 "copy": false, 00:09:32.113 "nvme_iov_md": false 00:09:32.113 }, 00:09:32.113 "memory_domains": [ 00:09:32.113 { 00:09:32.113 "dma_device_id": "system", 00:09:32.113 "dma_device_type": 1 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.113 "dma_device_type": 2 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "dma_device_id": "system", 00:09:32.113 "dma_device_type": 1 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.113 "dma_device_type": 2 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "dma_device_id": "system", 00:09:32.113 "dma_device_type": 1 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:32.113 "dma_device_type": 2 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "dma_device_id": "system", 00:09:32.113 "dma_device_type": 1 00:09:32.113 }, 00:09:32.113 { 00:09:32.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.113 "dma_device_type": 2 00:09:32.113 } 00:09:32.113 ], 00:09:32.113 "driver_specific": { 00:09:32.113 "raid": { 00:09:32.113 "uuid": "ba76fc49-9f42-47f2-8ced-84d2dd422386", 00:09:32.113 "strip_size_kb": 64, 00:09:32.113 "state": "online", 00:09:32.114 "raid_level": "raid0", 00:09:32.114 "superblock": false, 00:09:32.114 "num_base_bdevs": 4, 00:09:32.114 "num_base_bdevs_discovered": 4, 00:09:32.114 "num_base_bdevs_operational": 4, 00:09:32.114 "base_bdevs_list": [ 00:09:32.114 { 00:09:32.114 "name": "BaseBdev1", 00:09:32.114 "uuid": "5407c88c-f052-4bc1-b2f0-5ee8eedd8958", 00:09:32.114 "is_configured": true, 00:09:32.114 "data_offset": 0, 00:09:32.114 "data_size": 65536 00:09:32.114 }, 00:09:32.114 { 00:09:32.114 "name": "BaseBdev2", 00:09:32.114 "uuid": "076711be-7ca7-4a72-8f69-a6494ef6dd63", 00:09:32.114 "is_configured": true, 00:09:32.114 "data_offset": 0, 00:09:32.114 "data_size": 65536 00:09:32.114 }, 00:09:32.114 { 00:09:32.114 "name": "BaseBdev3", 00:09:32.114 "uuid": "52cb24ca-6424-4190-a692-d2178ebac9c3", 00:09:32.114 "is_configured": true, 00:09:32.114 "data_offset": 0, 00:09:32.114 "data_size": 65536 00:09:32.114 }, 00:09:32.114 { 00:09:32.114 "name": "BaseBdev4", 00:09:32.114 "uuid": "1776b099-ec70-49b2-92d3-3c1064561e1b", 00:09:32.114 "is_configured": true, 00:09:32.114 "data_offset": 0, 00:09:32.114 "data_size": 65536 00:09:32.114 } 00:09:32.114 ] 00:09:32.114 } 00:09:32.114 } 00:09:32.114 }' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:32.114 BaseBdev2 00:09:32.114 BaseBdev3 
00:09:32.114 BaseBdev4' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.114 23:27:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.114 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.372 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.372 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.372 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.372 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.372 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:32.372 23:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.372 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.372 23:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.372 23:27:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.372 [2024-09-30 23:27:12.049391] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:32.372 [2024-09-30 23:27:12.049422] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.372 [2024-09-30 23:27:12.049482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.372 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.373 "name": "Existed_Raid", 00:09:32.373 "uuid": "ba76fc49-9f42-47f2-8ced-84d2dd422386", 00:09:32.373 "strip_size_kb": 64, 00:09:32.373 "state": "offline", 00:09:32.373 "raid_level": "raid0", 00:09:32.373 "superblock": false, 00:09:32.373 "num_base_bdevs": 4, 00:09:32.373 "num_base_bdevs_discovered": 3, 00:09:32.373 "num_base_bdevs_operational": 3, 00:09:32.373 "base_bdevs_list": [ 00:09:32.373 { 00:09:32.373 "name": null, 00:09:32.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.373 "is_configured": false, 00:09:32.373 "data_offset": 0, 00:09:32.373 "data_size": 65536 00:09:32.373 }, 00:09:32.373 { 00:09:32.373 "name": "BaseBdev2", 00:09:32.373 "uuid": "076711be-7ca7-4a72-8f69-a6494ef6dd63", 00:09:32.373 "is_configured": 
true, 00:09:32.373 "data_offset": 0, 00:09:32.373 "data_size": 65536 00:09:32.373 }, 00:09:32.373 { 00:09:32.373 "name": "BaseBdev3", 00:09:32.373 "uuid": "52cb24ca-6424-4190-a692-d2178ebac9c3", 00:09:32.373 "is_configured": true, 00:09:32.373 "data_offset": 0, 00:09:32.373 "data_size": 65536 00:09:32.373 }, 00:09:32.373 { 00:09:32.373 "name": "BaseBdev4", 00:09:32.373 "uuid": "1776b099-ec70-49b2-92d3-3c1064561e1b", 00:09:32.373 "is_configured": true, 00:09:32.373 "data_offset": 0, 00:09:32.373 "data_size": 65536 00:09:32.373 } 00:09:32.373 ] 00:09:32.373 }' 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.373 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.938 [2024-09-30 23:27:12.571971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.938 [2024-09-30 23:27:12.635145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.938 23:27:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.938 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 [2024-09-30 23:27:12.706265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:32.939 [2024-09-30 23:27:12.706357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 BaseBdev2 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.939 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 [ 00:09:33.198 { 00:09:33.198 "name": "BaseBdev2", 00:09:33.198 "aliases": [ 00:09:33.198 "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5" 00:09:33.198 ], 00:09:33.198 "product_name": "Malloc disk", 00:09:33.198 "block_size": 512, 00:09:33.198 "num_blocks": 65536, 00:09:33.198 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:33.198 "assigned_rate_limits": { 00:09:33.198 "rw_ios_per_sec": 0, 00:09:33.198 "rw_mbytes_per_sec": 0, 00:09:33.198 "r_mbytes_per_sec": 0, 00:09:33.198 "w_mbytes_per_sec": 0 00:09:33.198 }, 00:09:33.198 "claimed": false, 00:09:33.198 "zoned": false, 00:09:33.198 "supported_io_types": { 00:09:33.198 "read": true, 00:09:33.198 "write": true, 00:09:33.198 "unmap": true, 00:09:33.198 "flush": true, 00:09:33.198 "reset": true, 00:09:33.198 "nvme_admin": false, 00:09:33.198 "nvme_io": false, 00:09:33.198 "nvme_io_md": false, 00:09:33.198 "write_zeroes": true, 00:09:33.198 "zcopy": true, 00:09:33.198 "get_zone_info": false, 00:09:33.198 "zone_management": false, 00:09:33.198 "zone_append": false, 00:09:33.198 "compare": false, 00:09:33.198 "compare_and_write": false, 00:09:33.198 "abort": true, 00:09:33.198 "seek_hole": false, 00:09:33.198 
"seek_data": false, 00:09:33.198 "copy": true, 00:09:33.198 "nvme_iov_md": false 00:09:33.198 }, 00:09:33.198 "memory_domains": [ 00:09:33.198 { 00:09:33.198 "dma_device_id": "system", 00:09:33.198 "dma_device_type": 1 00:09:33.198 }, 00:09:33.198 { 00:09:33.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.198 "dma_device_type": 2 00:09:33.198 } 00:09:33.198 ], 00:09:33.198 "driver_specific": {} 00:09:33.198 } 00:09:33.198 ] 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 BaseBdev3 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.198 [ 00:09:33.198 { 00:09:33.198 "name": "BaseBdev3", 00:09:33.198 "aliases": [ 00:09:33.198 "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac" 00:09:33.198 ], 00:09:33.198 "product_name": "Malloc disk", 00:09:33.198 "block_size": 512, 00:09:33.198 "num_blocks": 65536, 00:09:33.198 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:33.198 "assigned_rate_limits": { 00:09:33.198 "rw_ios_per_sec": 0, 00:09:33.198 "rw_mbytes_per_sec": 0, 00:09:33.198 "r_mbytes_per_sec": 0, 00:09:33.198 "w_mbytes_per_sec": 0 00:09:33.198 }, 00:09:33.198 "claimed": false, 00:09:33.198 "zoned": false, 00:09:33.198 "supported_io_types": { 00:09:33.198 "read": true, 00:09:33.198 "write": true, 00:09:33.198 "unmap": true, 00:09:33.198 "flush": true, 00:09:33.198 "reset": true, 00:09:33.198 "nvme_admin": false, 00:09:33.198 "nvme_io": false, 00:09:33.198 "nvme_io_md": false, 00:09:33.198 "write_zeroes": true, 00:09:33.198 "zcopy": true, 00:09:33.198 "get_zone_info": false, 00:09:33.198 "zone_management": false, 00:09:33.198 "zone_append": false, 00:09:33.198 "compare": false, 00:09:33.198 "compare_and_write": false, 00:09:33.198 "abort": true, 00:09:33.198 "seek_hole": false, 00:09:33.198 "seek_data": false, 
00:09:33.198 "copy": true, 00:09:33.198 "nvme_iov_md": false 00:09:33.198 }, 00:09:33.198 "memory_domains": [ 00:09:33.198 { 00:09:33.198 "dma_device_id": "system", 00:09:33.198 "dma_device_type": 1 00:09:33.198 }, 00:09:33.198 { 00:09:33.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.198 "dma_device_type": 2 00:09:33.198 } 00:09:33.198 ], 00:09:33.198 "driver_specific": {} 00:09:33.198 } 00:09:33.198 ] 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.198 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.199 BaseBdev4 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.199 
23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.199 [ 00:09:33.199 { 00:09:33.199 "name": "BaseBdev4", 00:09:33.199 "aliases": [ 00:09:33.199 "01366e2a-8a5e-4651-ae26-69cca7a83be0" 00:09:33.199 ], 00:09:33.199 "product_name": "Malloc disk", 00:09:33.199 "block_size": 512, 00:09:33.199 "num_blocks": 65536, 00:09:33.199 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:33.199 "assigned_rate_limits": { 00:09:33.199 "rw_ios_per_sec": 0, 00:09:33.199 "rw_mbytes_per_sec": 0, 00:09:33.199 "r_mbytes_per_sec": 0, 00:09:33.199 "w_mbytes_per_sec": 0 00:09:33.199 }, 00:09:33.199 "claimed": false, 00:09:33.199 "zoned": false, 00:09:33.199 "supported_io_types": { 00:09:33.199 "read": true, 00:09:33.199 "write": true, 00:09:33.199 "unmap": true, 00:09:33.199 "flush": true, 00:09:33.199 "reset": true, 00:09:33.199 "nvme_admin": false, 00:09:33.199 "nvme_io": false, 00:09:33.199 "nvme_io_md": false, 00:09:33.199 "write_zeroes": true, 00:09:33.199 "zcopy": true, 00:09:33.199 "get_zone_info": false, 00:09:33.199 "zone_management": false, 00:09:33.199 "zone_append": false, 00:09:33.199 "compare": false, 00:09:33.199 "compare_and_write": false, 00:09:33.199 "abort": true, 00:09:33.199 "seek_hole": false, 00:09:33.199 "seek_data": false, 00:09:33.199 
"copy": true, 00:09:33.199 "nvme_iov_md": false 00:09:33.199 }, 00:09:33.199 "memory_domains": [ 00:09:33.199 { 00:09:33.199 "dma_device_id": "system", 00:09:33.199 "dma_device_type": 1 00:09:33.199 }, 00:09:33.199 { 00:09:33.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.199 "dma_device_type": 2 00:09:33.199 } 00:09:33.199 ], 00:09:33.199 "driver_specific": {} 00:09:33.199 } 00:09:33.199 ] 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.199 [2024-09-30 23:27:12.921856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.199 [2024-09-30 23:27:12.921953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.199 [2024-09-30 23:27:12.921993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.199 [2024-09-30 23:27:12.923899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.199 [2024-09-30 23:27:12.923992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.199 23:27:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.199 "name": "Existed_Raid", 00:09:33.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.199 "strip_size_kb": 64, 00:09:33.199 "state": "configuring", 00:09:33.199 
"raid_level": "raid0", 00:09:33.199 "superblock": false, 00:09:33.199 "num_base_bdevs": 4, 00:09:33.199 "num_base_bdevs_discovered": 3, 00:09:33.199 "num_base_bdevs_operational": 4, 00:09:33.199 "base_bdevs_list": [ 00:09:33.199 { 00:09:33.199 "name": "BaseBdev1", 00:09:33.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.199 "is_configured": false, 00:09:33.199 "data_offset": 0, 00:09:33.199 "data_size": 0 00:09:33.199 }, 00:09:33.199 { 00:09:33.199 "name": "BaseBdev2", 00:09:33.199 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:33.199 "is_configured": true, 00:09:33.199 "data_offset": 0, 00:09:33.199 "data_size": 65536 00:09:33.199 }, 00:09:33.199 { 00:09:33.199 "name": "BaseBdev3", 00:09:33.199 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:33.199 "is_configured": true, 00:09:33.199 "data_offset": 0, 00:09:33.199 "data_size": 65536 00:09:33.199 }, 00:09:33.199 { 00:09:33.199 "name": "BaseBdev4", 00:09:33.199 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:33.199 "is_configured": true, 00:09:33.199 "data_offset": 0, 00:09:33.199 "data_size": 65536 00:09:33.199 } 00:09:33.199 ] 00:09:33.199 }' 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.199 23:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.766 [2024-09-30 23:27:13.365065] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.766 "name": "Existed_Raid", 00:09:33.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.766 "strip_size_kb": 64, 00:09:33.766 "state": "configuring", 00:09:33.766 "raid_level": "raid0", 00:09:33.766 "superblock": false, 00:09:33.766 
"num_base_bdevs": 4, 00:09:33.766 "num_base_bdevs_discovered": 2, 00:09:33.766 "num_base_bdevs_operational": 4, 00:09:33.766 "base_bdevs_list": [ 00:09:33.766 { 00:09:33.766 "name": "BaseBdev1", 00:09:33.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.766 "is_configured": false, 00:09:33.766 "data_offset": 0, 00:09:33.766 "data_size": 0 00:09:33.766 }, 00:09:33.766 { 00:09:33.766 "name": null, 00:09:33.766 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:33.766 "is_configured": false, 00:09:33.766 "data_offset": 0, 00:09:33.766 "data_size": 65536 00:09:33.766 }, 00:09:33.766 { 00:09:33.766 "name": "BaseBdev3", 00:09:33.766 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:33.766 "is_configured": true, 00:09:33.766 "data_offset": 0, 00:09:33.766 "data_size": 65536 00:09:33.766 }, 00:09:33.766 { 00:09:33.766 "name": "BaseBdev4", 00:09:33.766 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:33.766 "is_configured": true, 00:09:33.766 "data_offset": 0, 00:09:33.766 "data_size": 65536 00:09:33.766 } 00:09:33.766 ] 00:09:33.766 }' 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.766 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.023 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.023 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:34.024 23:27:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.024 [2024-09-30 23:27:13.843294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.024 BaseBdev1 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.024 23:27:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.024 [ 00:09:34.024 { 00:09:34.024 "name": "BaseBdev1", 00:09:34.024 "aliases": [ 00:09:34.024 "cb06ee49-eb31-4a07-bb30-5851a46c149d" 00:09:34.024 ], 00:09:34.024 "product_name": "Malloc disk", 00:09:34.024 "block_size": 512, 00:09:34.024 "num_blocks": 65536, 00:09:34.024 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:34.024 "assigned_rate_limits": { 00:09:34.024 "rw_ios_per_sec": 0, 00:09:34.024 "rw_mbytes_per_sec": 0, 00:09:34.024 "r_mbytes_per_sec": 0, 00:09:34.024 "w_mbytes_per_sec": 0 00:09:34.024 }, 00:09:34.024 "claimed": true, 00:09:34.024 "claim_type": "exclusive_write", 00:09:34.024 "zoned": false, 00:09:34.024 "supported_io_types": { 00:09:34.024 "read": true, 00:09:34.024 "write": true, 00:09:34.024 "unmap": true, 00:09:34.024 "flush": true, 00:09:34.024 "reset": true, 00:09:34.282 "nvme_admin": false, 00:09:34.282 "nvme_io": false, 00:09:34.282 "nvme_io_md": false, 00:09:34.282 "write_zeroes": true, 00:09:34.282 "zcopy": true, 00:09:34.282 "get_zone_info": false, 00:09:34.282 "zone_management": false, 00:09:34.282 "zone_append": false, 00:09:34.282 "compare": false, 00:09:34.282 "compare_and_write": false, 00:09:34.282 "abort": true, 00:09:34.282 "seek_hole": false, 00:09:34.282 "seek_data": false, 00:09:34.282 "copy": true, 00:09:34.282 "nvme_iov_md": false 00:09:34.282 }, 00:09:34.282 "memory_domains": [ 00:09:34.282 { 00:09:34.282 "dma_device_id": "system", 00:09:34.282 "dma_device_type": 1 00:09:34.282 }, 00:09:34.282 { 00:09:34.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.282 "dma_device_type": 2 00:09:34.282 } 00:09:34.282 ], 00:09:34.282 "driver_specific": {} 00:09:34.282 } 00:09:34.282 ] 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.282 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.282 "name": "Existed_Raid", 00:09:34.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.282 "strip_size_kb": 64, 00:09:34.282 "state": "configuring", 00:09:34.282 "raid_level": "raid0", 00:09:34.282 "superblock": false, 
00:09:34.282 "num_base_bdevs": 4, 00:09:34.282 "num_base_bdevs_discovered": 3, 00:09:34.282 "num_base_bdevs_operational": 4, 00:09:34.282 "base_bdevs_list": [ 00:09:34.282 { 00:09:34.282 "name": "BaseBdev1", 00:09:34.282 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:34.282 "is_configured": true, 00:09:34.282 "data_offset": 0, 00:09:34.282 "data_size": 65536 00:09:34.282 }, 00:09:34.282 { 00:09:34.282 "name": null, 00:09:34.282 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:34.282 "is_configured": false, 00:09:34.282 "data_offset": 0, 00:09:34.282 "data_size": 65536 00:09:34.282 }, 00:09:34.282 { 00:09:34.282 "name": "BaseBdev3", 00:09:34.282 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:34.282 "is_configured": true, 00:09:34.282 "data_offset": 0, 00:09:34.282 "data_size": 65536 00:09:34.282 }, 00:09:34.282 { 00:09:34.282 "name": "BaseBdev4", 00:09:34.282 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:34.282 "is_configured": true, 00:09:34.282 "data_offset": 0, 00:09:34.282 "data_size": 65536 00:09:34.282 } 00:09:34.282 ] 00:09:34.282 }' 00:09:34.283 23:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.283 23:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.542 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.542 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.542 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.542 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.542 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:34.801 23:27:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.801 [2024-09-30 23:27:14.402385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.801 23:27:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.801 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.801 "name": "Existed_Raid", 00:09:34.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.801 "strip_size_kb": 64, 00:09:34.801 "state": "configuring", 00:09:34.801 "raid_level": "raid0", 00:09:34.801 "superblock": false, 00:09:34.801 "num_base_bdevs": 4, 00:09:34.801 "num_base_bdevs_discovered": 2, 00:09:34.801 "num_base_bdevs_operational": 4, 00:09:34.801 "base_bdevs_list": [ 00:09:34.801 { 00:09:34.801 "name": "BaseBdev1", 00:09:34.801 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:34.801 "is_configured": true, 00:09:34.801 "data_offset": 0, 00:09:34.801 "data_size": 65536 00:09:34.801 }, 00:09:34.801 { 00:09:34.801 "name": null, 00:09:34.801 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:34.801 "is_configured": false, 00:09:34.801 "data_offset": 0, 00:09:34.801 "data_size": 65536 00:09:34.801 }, 00:09:34.801 { 00:09:34.801 "name": null, 00:09:34.801 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:34.801 "is_configured": false, 00:09:34.801 "data_offset": 0, 00:09:34.801 "data_size": 65536 00:09:34.801 }, 00:09:34.801 { 00:09:34.801 "name": "BaseBdev4", 00:09:34.801 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:34.801 "is_configured": true, 00:09:34.801 "data_offset": 0, 00:09:34.801 "data_size": 65536 00:09:34.801 } 00:09:34.802 ] 00:09:34.802 }' 00:09:34.802 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.802 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.060 [2024-09-30 23:27:14.861636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.060 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.319 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.319 "name": "Existed_Raid", 00:09:35.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.319 "strip_size_kb": 64, 00:09:35.319 "state": "configuring", 00:09:35.319 "raid_level": "raid0", 00:09:35.319 "superblock": false, 00:09:35.319 "num_base_bdevs": 4, 00:09:35.319 "num_base_bdevs_discovered": 3, 00:09:35.319 "num_base_bdevs_operational": 4, 00:09:35.319 "base_bdevs_list": [ 00:09:35.319 { 00:09:35.319 "name": "BaseBdev1", 00:09:35.319 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:35.319 "is_configured": true, 00:09:35.319 "data_offset": 0, 00:09:35.319 "data_size": 65536 00:09:35.319 }, 00:09:35.319 { 00:09:35.319 "name": null, 00:09:35.319 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:35.319 "is_configured": false, 00:09:35.319 "data_offset": 0, 00:09:35.319 "data_size": 65536 00:09:35.319 }, 00:09:35.319 { 00:09:35.319 "name": "BaseBdev3", 00:09:35.319 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 
00:09:35.319 "is_configured": true, 00:09:35.319 "data_offset": 0, 00:09:35.319 "data_size": 65536 00:09:35.319 }, 00:09:35.319 { 00:09:35.319 "name": "BaseBdev4", 00:09:35.319 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:35.319 "is_configured": true, 00:09:35.319 "data_offset": 0, 00:09:35.319 "data_size": 65536 00:09:35.319 } 00:09:35.319 ] 00:09:35.319 }' 00:09:35.319 23:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.319 23:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.578 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.578 [2024-09-30 23:27:15.328883] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:35.579 23:27:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.579 "name": "Existed_Raid", 00:09:35.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.579 "strip_size_kb": 64, 00:09:35.579 "state": "configuring", 00:09:35.579 "raid_level": "raid0", 00:09:35.579 "superblock": false, 00:09:35.579 "num_base_bdevs": 4, 00:09:35.579 "num_base_bdevs_discovered": 2, 00:09:35.579 
"num_base_bdevs_operational": 4, 00:09:35.579 "base_bdevs_list": [ 00:09:35.579 { 00:09:35.579 "name": null, 00:09:35.579 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:35.579 "is_configured": false, 00:09:35.579 "data_offset": 0, 00:09:35.579 "data_size": 65536 00:09:35.579 }, 00:09:35.579 { 00:09:35.579 "name": null, 00:09:35.579 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:35.579 "is_configured": false, 00:09:35.579 "data_offset": 0, 00:09:35.579 "data_size": 65536 00:09:35.579 }, 00:09:35.579 { 00:09:35.579 "name": "BaseBdev3", 00:09:35.579 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:35.579 "is_configured": true, 00:09:35.579 "data_offset": 0, 00:09:35.579 "data_size": 65536 00:09:35.579 }, 00:09:35.579 { 00:09:35.579 "name": "BaseBdev4", 00:09:35.579 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:35.579 "is_configured": true, 00:09:35.579 "data_offset": 0, 00:09:35.579 "data_size": 65536 00:09:35.579 } 00:09:35.579 ] 00:09:35.579 }' 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.579 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.146 [2024-09-30 23:27:15.814622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.146 
23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.146 "name": "Existed_Raid", 00:09:36.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.146 "strip_size_kb": 64, 00:09:36.146 "state": "configuring", 00:09:36.146 "raid_level": "raid0", 00:09:36.146 "superblock": false, 00:09:36.146 "num_base_bdevs": 4, 00:09:36.146 "num_base_bdevs_discovered": 3, 00:09:36.146 "num_base_bdevs_operational": 4, 00:09:36.146 "base_bdevs_list": [ 00:09:36.146 { 00:09:36.146 "name": null, 00:09:36.146 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:36.146 "is_configured": false, 00:09:36.146 "data_offset": 0, 00:09:36.146 "data_size": 65536 00:09:36.146 }, 00:09:36.146 { 00:09:36.146 "name": "BaseBdev2", 00:09:36.146 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:36.146 "is_configured": true, 00:09:36.146 "data_offset": 0, 00:09:36.146 "data_size": 65536 00:09:36.146 }, 00:09:36.146 { 00:09:36.146 "name": "BaseBdev3", 00:09:36.146 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:36.146 "is_configured": true, 00:09:36.146 "data_offset": 0, 00:09:36.146 "data_size": 65536 00:09:36.146 }, 00:09:36.146 { 00:09:36.146 "name": "BaseBdev4", 00:09:36.146 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:36.146 "is_configured": true, 00:09:36.146 "data_offset": 0, 00:09:36.146 "data_size": 65536 00:09:36.146 } 00:09:36.146 ] 00:09:36.146 }' 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.146 23:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.713 23:27:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cb06ee49-eb31-4a07-bb30-5851a46c149d 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.713 [2024-09-30 23:27:16.356532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:36.713 [2024-09-30 23:27:16.356630] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:36.713 [2024-09-30 23:27:16.356655] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:36.713 [2024-09-30 23:27:16.356958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:36.713 [2024-09-30 23:27:16.357118] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:36.713 [2024-09-30 23:27:16.357136] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:36.713 [2024-09-30 23:27:16.357301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.713 NewBaseBdev 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.713 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:36.713 [ 00:09:36.713 { 00:09:36.713 "name": "NewBaseBdev", 00:09:36.713 "aliases": [ 00:09:36.713 "cb06ee49-eb31-4a07-bb30-5851a46c149d" 00:09:36.713 ], 00:09:36.713 "product_name": "Malloc disk", 00:09:36.713 "block_size": 512, 00:09:36.713 "num_blocks": 65536, 00:09:36.713 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:36.713 "assigned_rate_limits": { 00:09:36.713 "rw_ios_per_sec": 0, 00:09:36.713 "rw_mbytes_per_sec": 0, 00:09:36.713 "r_mbytes_per_sec": 0, 00:09:36.713 "w_mbytes_per_sec": 0 00:09:36.713 }, 00:09:36.713 "claimed": true, 00:09:36.713 "claim_type": "exclusive_write", 00:09:36.713 "zoned": false, 00:09:36.713 "supported_io_types": { 00:09:36.713 "read": true, 00:09:36.713 "write": true, 00:09:36.713 "unmap": true, 00:09:36.713 "flush": true, 00:09:36.713 "reset": true, 00:09:36.713 "nvme_admin": false, 00:09:36.713 "nvme_io": false, 00:09:36.713 "nvme_io_md": false, 00:09:36.713 "write_zeroes": true, 00:09:36.713 "zcopy": true, 00:09:36.713 "get_zone_info": false, 00:09:36.713 "zone_management": false, 00:09:36.713 "zone_append": false, 00:09:36.713 "compare": false, 00:09:36.713 "compare_and_write": false, 00:09:36.714 "abort": true, 00:09:36.714 "seek_hole": false, 00:09:36.714 "seek_data": false, 00:09:36.714 "copy": true, 00:09:36.714 "nvme_iov_md": false 00:09:36.714 }, 00:09:36.714 "memory_domains": [ 00:09:36.714 { 00:09:36.714 "dma_device_id": "system", 00:09:36.714 "dma_device_type": 1 00:09:36.714 }, 00:09:36.714 { 00:09:36.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.714 "dma_device_type": 2 00:09:36.714 } 00:09:36.714 ], 00:09:36.714 "driver_specific": {} 00:09:36.714 } 00:09:36.714 ] 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.714 "name": "Existed_Raid", 00:09:36.714 "uuid": "1160a832-679d-41d4-915f-a2be8d4ffc8b", 00:09:36.714 "strip_size_kb": 64, 00:09:36.714 "state": "online", 00:09:36.714 "raid_level": "raid0", 00:09:36.714 "superblock": false, 00:09:36.714 "num_base_bdevs": 4, 00:09:36.714 
"num_base_bdevs_discovered": 4, 00:09:36.714 "num_base_bdevs_operational": 4, 00:09:36.714 "base_bdevs_list": [ 00:09:36.714 { 00:09:36.714 "name": "NewBaseBdev", 00:09:36.714 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:36.714 "is_configured": true, 00:09:36.714 "data_offset": 0, 00:09:36.714 "data_size": 65536 00:09:36.714 }, 00:09:36.714 { 00:09:36.714 "name": "BaseBdev2", 00:09:36.714 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:36.714 "is_configured": true, 00:09:36.714 "data_offset": 0, 00:09:36.714 "data_size": 65536 00:09:36.714 }, 00:09:36.714 { 00:09:36.714 "name": "BaseBdev3", 00:09:36.714 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:36.714 "is_configured": true, 00:09:36.714 "data_offset": 0, 00:09:36.714 "data_size": 65536 00:09:36.714 }, 00:09:36.714 { 00:09:36.714 "name": "BaseBdev4", 00:09:36.714 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:36.714 "is_configured": true, 00:09:36.714 "data_offset": 0, 00:09:36.714 "data_size": 65536 00:09:36.714 } 00:09:36.714 ] 00:09:36.714 }' 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.714 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.281 [2024-09-30 23:27:16.860021] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.281 "name": "Existed_Raid", 00:09:37.281 "aliases": [ 00:09:37.281 "1160a832-679d-41d4-915f-a2be8d4ffc8b" 00:09:37.281 ], 00:09:37.281 "product_name": "Raid Volume", 00:09:37.281 "block_size": 512, 00:09:37.281 "num_blocks": 262144, 00:09:37.281 "uuid": "1160a832-679d-41d4-915f-a2be8d4ffc8b", 00:09:37.281 "assigned_rate_limits": { 00:09:37.281 "rw_ios_per_sec": 0, 00:09:37.281 "rw_mbytes_per_sec": 0, 00:09:37.281 "r_mbytes_per_sec": 0, 00:09:37.281 "w_mbytes_per_sec": 0 00:09:37.281 }, 00:09:37.281 "claimed": false, 00:09:37.281 "zoned": false, 00:09:37.281 "supported_io_types": { 00:09:37.281 "read": true, 00:09:37.281 "write": true, 00:09:37.281 "unmap": true, 00:09:37.281 "flush": true, 00:09:37.281 "reset": true, 00:09:37.281 "nvme_admin": false, 00:09:37.281 "nvme_io": false, 00:09:37.281 "nvme_io_md": false, 00:09:37.281 "write_zeroes": true, 00:09:37.281 "zcopy": false, 00:09:37.281 "get_zone_info": false, 00:09:37.281 "zone_management": false, 00:09:37.281 "zone_append": false, 00:09:37.281 "compare": false, 00:09:37.281 "compare_and_write": false, 00:09:37.281 "abort": false, 00:09:37.281 "seek_hole": false, 00:09:37.281 "seek_data": false, 00:09:37.281 "copy": false, 00:09:37.281 "nvme_iov_md": false 00:09:37.281 }, 00:09:37.281 "memory_domains": [ 
00:09:37.281 { 00:09:37.281 "dma_device_id": "system", 00:09:37.281 "dma_device_type": 1 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.281 "dma_device_type": 2 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "dma_device_id": "system", 00:09:37.281 "dma_device_type": 1 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.281 "dma_device_type": 2 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "dma_device_id": "system", 00:09:37.281 "dma_device_type": 1 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.281 "dma_device_type": 2 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "dma_device_id": "system", 00:09:37.281 "dma_device_type": 1 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.281 "dma_device_type": 2 00:09:37.281 } 00:09:37.281 ], 00:09:37.281 "driver_specific": { 00:09:37.281 "raid": { 00:09:37.281 "uuid": "1160a832-679d-41d4-915f-a2be8d4ffc8b", 00:09:37.281 "strip_size_kb": 64, 00:09:37.281 "state": "online", 00:09:37.281 "raid_level": "raid0", 00:09:37.281 "superblock": false, 00:09:37.281 "num_base_bdevs": 4, 00:09:37.281 "num_base_bdevs_discovered": 4, 00:09:37.281 "num_base_bdevs_operational": 4, 00:09:37.281 "base_bdevs_list": [ 00:09:37.281 { 00:09:37.281 "name": "NewBaseBdev", 00:09:37.281 "uuid": "cb06ee49-eb31-4a07-bb30-5851a46c149d", 00:09:37.281 "is_configured": true, 00:09:37.281 "data_offset": 0, 00:09:37.281 "data_size": 65536 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "name": "BaseBdev2", 00:09:37.281 "uuid": "ca1c6bf2-9ffe-4c3f-8f1d-7ebe303b30a5", 00:09:37.281 "is_configured": true, 00:09:37.281 "data_offset": 0, 00:09:37.281 "data_size": 65536 00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "name": "BaseBdev3", 00:09:37.281 "uuid": "b2fbb8bd-62b5-4915-8c89-de1e3aa4c8ac", 00:09:37.281 "is_configured": true, 00:09:37.281 "data_offset": 0, 00:09:37.281 "data_size": 65536 
00:09:37.281 }, 00:09:37.281 { 00:09:37.281 "name": "BaseBdev4", 00:09:37.281 "uuid": "01366e2a-8a5e-4651-ae26-69cca7a83be0", 00:09:37.281 "is_configured": true, 00:09:37.281 "data_offset": 0, 00:09:37.281 "data_size": 65536 00:09:37.281 } 00:09:37.281 ] 00:09:37.281 } 00:09:37.281 } 00:09:37.281 }' 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:37.281 BaseBdev2 00:09:37.281 BaseBdev3 00:09:37.281 BaseBdev4' 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 23:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.281 
23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.540 [2024-09-30 23:27:17.175150] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.540 [2024-09-30 23:27:17.175176] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.540 [2024-09-30 23:27:17.175253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.540 [2024-09-30 23:27:17.175318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.540 [2024-09-30 23:27:17.175327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80406 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80406 ']' 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80406 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80406 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80406' 00:09:37.540 killing process with pid 80406 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80406 00:09:37.540 [2024-09-30 23:27:17.223230] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.540 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80406 00:09:37.540 [2024-09-30 23:27:17.264749] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.798 00:09:37.798 real 0m9.598s 00:09:37.798 user 0m16.317s 00:09:37.798 sys 0m2.071s 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.798 ************************************ 00:09:37.798 END TEST raid_state_function_test 00:09:37.798 ************************************ 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.798 23:27:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:37.798 23:27:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:37.798 23:27:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:37.798 23:27:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.798 ************************************ 00:09:37.798 START TEST raid_state_function_test_sb 00:09:37.798 ************************************ 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:37.798 
23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81056 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:37.798 Process raid pid: 81056 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81056' 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81056 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81056 ']' 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.798 23:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.055 [2024-09-30 23:27:17.676925] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:38.055 [2024-09-30 23:27:17.677105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.055 [2024-09-30 23:27:17.838827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.055 [2024-09-30 23:27:17.883063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.313 [2024-09-30 23:27:17.925321] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.313 [2024-09-30 23:27:17.925365] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.880 [2024-09-30 23:27:18.510664] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.880 [2024-09-30 23:27:18.510713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.880 [2024-09-30 23:27:18.510724] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.880 [2024-09-30 23:27:18.510734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.880 [2024-09-30 23:27:18.510740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:38.880 [2024-09-30 23:27:18.510752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.880 [2024-09-30 23:27:18.510758] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:38.880 [2024-09-30 23:27:18.510766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.880 23:27:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.880 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.880 "name": "Existed_Raid", 00:09:38.880 "uuid": "9bd38e62-d745-4a9d-997f-970eadccbe2e", 00:09:38.880 "strip_size_kb": 64, 00:09:38.880 "state": "configuring", 00:09:38.880 "raid_level": "raid0", 00:09:38.880 "superblock": true, 00:09:38.880 "num_base_bdevs": 4, 00:09:38.880 "num_base_bdevs_discovered": 0, 00:09:38.880 "num_base_bdevs_operational": 4, 00:09:38.880 "base_bdevs_list": [ 00:09:38.880 { 00:09:38.880 "name": "BaseBdev1", 00:09:38.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.880 "is_configured": false, 00:09:38.880 "data_offset": 0, 00:09:38.880 "data_size": 0 00:09:38.880 }, 00:09:38.880 { 00:09:38.880 "name": "BaseBdev2", 00:09:38.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.880 "is_configured": false, 00:09:38.880 "data_offset": 0, 00:09:38.880 "data_size": 0 00:09:38.880 }, 00:09:38.880 { 00:09:38.880 "name": "BaseBdev3", 00:09:38.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.880 "is_configured": false, 00:09:38.880 "data_offset": 0, 00:09:38.880 "data_size": 0 00:09:38.880 }, 00:09:38.881 { 00:09:38.881 "name": "BaseBdev4", 00:09:38.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.881 "is_configured": false, 00:09:38.881 "data_offset": 0, 00:09:38.881 "data_size": 0 00:09:38.881 } 00:09:38.881 ] 00:09:38.881 }' 00:09:38.881 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.881 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.140 [2024-09-30 23:27:18.953782] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.140 [2024-09-30 23:27:18.953831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.140 [2024-09-30 23:27:18.965818] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.140 [2024-09-30 23:27:18.965926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.140 [2024-09-30 23:27:18.965955] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.140 [2024-09-30 23:27:18.965980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.140 [2024-09-30 23:27:18.965999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.140 [2024-09-30 23:27:18.966020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.140 [2024-09-30 23:27:18.966047] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:39.140 [2024-09-30 23:27:18.966072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.140 [2024-09-30 23:27:18.986609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.140 BaseBdev1 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.140 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.402 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:39.403 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:39.403 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.403 23:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.403 [ 00:09:39.403 { 00:09:39.403 "name": "BaseBdev1", 00:09:39.403 "aliases": [ 00:09:39.403 "bf868686-b130-41e0-8d52-86367d60c775" 00:09:39.403 ], 00:09:39.403 "product_name": "Malloc disk", 00:09:39.403 "block_size": 512, 00:09:39.403 "num_blocks": 65536, 00:09:39.403 "uuid": "bf868686-b130-41e0-8d52-86367d60c775", 00:09:39.403 "assigned_rate_limits": { 00:09:39.403 "rw_ios_per_sec": 0, 00:09:39.403 "rw_mbytes_per_sec": 0, 00:09:39.403 "r_mbytes_per_sec": 0, 00:09:39.403 "w_mbytes_per_sec": 0 00:09:39.403 }, 00:09:39.403 "claimed": true, 00:09:39.403 "claim_type": "exclusive_write", 00:09:39.403 "zoned": false, 00:09:39.403 "supported_io_types": { 00:09:39.403 "read": true, 00:09:39.403 "write": true, 00:09:39.403 "unmap": true, 00:09:39.403 "flush": true, 00:09:39.403 "reset": true, 00:09:39.403 "nvme_admin": false, 00:09:39.403 "nvme_io": false, 00:09:39.403 "nvme_io_md": false, 00:09:39.403 "write_zeroes": true, 00:09:39.403 "zcopy": true, 00:09:39.403 "get_zone_info": false, 00:09:39.403 "zone_management": false, 00:09:39.403 "zone_append": false, 00:09:39.403 "compare": false, 00:09:39.403 "compare_and_write": false, 00:09:39.403 "abort": true, 00:09:39.403 "seek_hole": false, 00:09:39.403 "seek_data": false, 00:09:39.403 "copy": true, 00:09:39.403 "nvme_iov_md": false 00:09:39.403 }, 00:09:39.403 "memory_domains": [ 00:09:39.403 { 00:09:39.403 "dma_device_id": "system", 00:09:39.403 "dma_device_type": 1 00:09:39.403 }, 00:09:39.403 { 00:09:39.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.403 "dma_device_type": 2 00:09:39.403 } 00:09:39.403 ], 00:09:39.403 "driver_specific": {} 
00:09:39.403 } 00:09:39.403 ] 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.403 "name": "Existed_Raid", 00:09:39.403 "uuid": "a44bcc22-d296-4be8-9da5-fb3c12ee45f6", 00:09:39.403 "strip_size_kb": 64, 00:09:39.403 "state": "configuring", 00:09:39.403 "raid_level": "raid0", 00:09:39.403 "superblock": true, 00:09:39.403 "num_base_bdevs": 4, 00:09:39.403 "num_base_bdevs_discovered": 1, 00:09:39.403 "num_base_bdevs_operational": 4, 00:09:39.403 "base_bdevs_list": [ 00:09:39.403 { 00:09:39.403 "name": "BaseBdev1", 00:09:39.403 "uuid": "bf868686-b130-41e0-8d52-86367d60c775", 00:09:39.403 "is_configured": true, 00:09:39.403 "data_offset": 2048, 00:09:39.403 "data_size": 63488 00:09:39.403 }, 00:09:39.403 { 00:09:39.403 "name": "BaseBdev2", 00:09:39.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.403 "is_configured": false, 00:09:39.403 "data_offset": 0, 00:09:39.403 "data_size": 0 00:09:39.403 }, 00:09:39.403 { 00:09:39.403 "name": "BaseBdev3", 00:09:39.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.403 "is_configured": false, 00:09:39.403 "data_offset": 0, 00:09:39.403 "data_size": 0 00:09:39.403 }, 00:09:39.403 { 00:09:39.403 "name": "BaseBdev4", 00:09:39.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.403 "is_configured": false, 00:09:39.403 "data_offset": 0, 00:09:39.403 "data_size": 0 00:09:39.403 } 00:09:39.403 ] 00:09:39.403 }' 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.403 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.663 [2024-09-30 23:27:19.477813] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.663 [2024-09-30 23:27:19.477915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.663 [2024-09-30 23:27:19.485840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.663 [2024-09-30 23:27:19.487695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.663 [2024-09-30 23:27:19.487738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.663 [2024-09-30 23:27:19.487748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.663 [2024-09-30 23:27:19.487757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.663 [2024-09-30 23:27:19.487763] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:39.663 [2024-09-30 23:27:19.487772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:39.663 23:27:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.663 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.664 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.922 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.922 "name": 
"Existed_Raid", 00:09:39.922 "uuid": "b0677414-8356-4853-8f51-e448ae152749", 00:09:39.922 "strip_size_kb": 64, 00:09:39.922 "state": "configuring", 00:09:39.922 "raid_level": "raid0", 00:09:39.922 "superblock": true, 00:09:39.922 "num_base_bdevs": 4, 00:09:39.922 "num_base_bdevs_discovered": 1, 00:09:39.922 "num_base_bdevs_operational": 4, 00:09:39.922 "base_bdevs_list": [ 00:09:39.922 { 00:09:39.922 "name": "BaseBdev1", 00:09:39.922 "uuid": "bf868686-b130-41e0-8d52-86367d60c775", 00:09:39.922 "is_configured": true, 00:09:39.922 "data_offset": 2048, 00:09:39.922 "data_size": 63488 00:09:39.922 }, 00:09:39.922 { 00:09:39.922 "name": "BaseBdev2", 00:09:39.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.922 "is_configured": false, 00:09:39.922 "data_offset": 0, 00:09:39.922 "data_size": 0 00:09:39.922 }, 00:09:39.922 { 00:09:39.922 "name": "BaseBdev3", 00:09:39.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.922 "is_configured": false, 00:09:39.922 "data_offset": 0, 00:09:39.922 "data_size": 0 00:09:39.922 }, 00:09:39.922 { 00:09:39.922 "name": "BaseBdev4", 00:09:39.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.922 "is_configured": false, 00:09:39.922 "data_offset": 0, 00:09:39.922 "data_size": 0 00:09:39.922 } 00:09:39.922 ] 00:09:39.922 }' 00:09:39.922 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.922 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.181 [2024-09-30 23:27:19.939342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
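The `waitforbdev BaseBdev2` trace above shows `autotest_common.sh` defaulting an empty `bdev_timeout` argument to 2000 ms before polling `bdev_get_bdevs`. A minimal standalone sketch of just that defaulting step (the function name here is made up for illustration; the real helper lives in `common/autotest_common.sh`):

```shell
#!/bin/sh
# Sketch of the timeout defaulting visible in the waitforbdev trace:
# an empty/omitted timeout argument falls back to 2000 (milliseconds),
# which is then passed to `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>`.
resolve_bdev_timeout() {
    bdev_timeout=$1
    if [ -z "$bdev_timeout" ]; then
        bdev_timeout=2000   # default seen in the log (@902)
    fi
    echo "$bdev_timeout"
}

resolve_bdev_timeout ""      # → 2000
resolve_bdev_timeout 5000    # → 5000
```

In the actual test this resolved value bounds how long the RPC waits for the freshly created malloc bdev to appear before the test fails.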
00:09:40.181 BaseBdev2 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.181 [ 00:09:40.181 { 00:09:40.181 "name": "BaseBdev2", 00:09:40.181 "aliases": [ 00:09:40.181 "65a3d501-bebd-49c9-96fe-5c5920976b77" 00:09:40.181 ], 00:09:40.181 "product_name": "Malloc disk", 00:09:40.181 "block_size": 512, 00:09:40.181 "num_blocks": 65536, 00:09:40.181 "uuid": "65a3d501-bebd-49c9-96fe-5c5920976b77", 00:09:40.181 
"assigned_rate_limits": { 00:09:40.181 "rw_ios_per_sec": 0, 00:09:40.181 "rw_mbytes_per_sec": 0, 00:09:40.181 "r_mbytes_per_sec": 0, 00:09:40.181 "w_mbytes_per_sec": 0 00:09:40.181 }, 00:09:40.181 "claimed": true, 00:09:40.181 "claim_type": "exclusive_write", 00:09:40.181 "zoned": false, 00:09:40.181 "supported_io_types": { 00:09:40.181 "read": true, 00:09:40.181 "write": true, 00:09:40.181 "unmap": true, 00:09:40.181 "flush": true, 00:09:40.181 "reset": true, 00:09:40.181 "nvme_admin": false, 00:09:40.181 "nvme_io": false, 00:09:40.181 "nvme_io_md": false, 00:09:40.181 "write_zeroes": true, 00:09:40.181 "zcopy": true, 00:09:40.181 "get_zone_info": false, 00:09:40.181 "zone_management": false, 00:09:40.181 "zone_append": false, 00:09:40.181 "compare": false, 00:09:40.181 "compare_and_write": false, 00:09:40.181 "abort": true, 00:09:40.181 "seek_hole": false, 00:09:40.181 "seek_data": false, 00:09:40.181 "copy": true, 00:09:40.181 "nvme_iov_md": false 00:09:40.181 }, 00:09:40.181 "memory_domains": [ 00:09:40.181 { 00:09:40.181 "dma_device_id": "system", 00:09:40.181 "dma_device_type": 1 00:09:40.181 }, 00:09:40.181 { 00:09:40.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.181 "dma_device_type": 2 00:09:40.181 } 00:09:40.181 ], 00:09:40.181 "driver_specific": {} 00:09:40.181 } 00:09:40.181 ] 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.181 23:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.181 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.181 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.181 "name": "Existed_Raid", 00:09:40.181 "uuid": "b0677414-8356-4853-8f51-e448ae152749", 00:09:40.181 "strip_size_kb": 64, 00:09:40.181 "state": "configuring", 00:09:40.181 "raid_level": "raid0", 00:09:40.181 "superblock": true, 00:09:40.181 "num_base_bdevs": 4, 00:09:40.181 "num_base_bdevs_discovered": 2, 00:09:40.181 "num_base_bdevs_operational": 4, 
00:09:40.182 "base_bdevs_list": [ 00:09:40.182 { 00:09:40.182 "name": "BaseBdev1", 00:09:40.182 "uuid": "bf868686-b130-41e0-8d52-86367d60c775", 00:09:40.182 "is_configured": true, 00:09:40.182 "data_offset": 2048, 00:09:40.182 "data_size": 63488 00:09:40.182 }, 00:09:40.182 { 00:09:40.182 "name": "BaseBdev2", 00:09:40.182 "uuid": "65a3d501-bebd-49c9-96fe-5c5920976b77", 00:09:40.182 "is_configured": true, 00:09:40.182 "data_offset": 2048, 00:09:40.182 "data_size": 63488 00:09:40.182 }, 00:09:40.182 { 00:09:40.182 "name": "BaseBdev3", 00:09:40.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.182 "is_configured": false, 00:09:40.182 "data_offset": 0, 00:09:40.182 "data_size": 0 00:09:40.182 }, 00:09:40.182 { 00:09:40.182 "name": "BaseBdev4", 00:09:40.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.182 "is_configured": false, 00:09:40.182 "data_offset": 0, 00:09:40.182 "data_size": 0 00:09:40.182 } 00:09:40.182 ] 00:09:40.182 }' 00:09:40.182 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.182 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.793 [2024-09-30 23:27:20.445455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.793 BaseBdev3 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.793 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.793 [ 00:09:40.793 { 00:09:40.793 "name": "BaseBdev3", 00:09:40.793 "aliases": [ 00:09:40.793 "2610c43e-7887-4ffe-a1ae-67944182f2f5" 00:09:40.793 ], 00:09:40.793 "product_name": "Malloc disk", 00:09:40.793 "block_size": 512, 00:09:40.793 "num_blocks": 65536, 00:09:40.793 "uuid": "2610c43e-7887-4ffe-a1ae-67944182f2f5", 00:09:40.793 "assigned_rate_limits": { 00:09:40.793 "rw_ios_per_sec": 0, 00:09:40.793 "rw_mbytes_per_sec": 0, 00:09:40.793 "r_mbytes_per_sec": 0, 00:09:40.793 "w_mbytes_per_sec": 0 00:09:40.793 }, 00:09:40.793 "claimed": true, 00:09:40.793 "claim_type": "exclusive_write", 00:09:40.793 "zoned": false, 00:09:40.793 "supported_io_types": { 00:09:40.793 "read": true, 00:09:40.793 
"write": true, 00:09:40.793 "unmap": true, 00:09:40.793 "flush": true, 00:09:40.793 "reset": true, 00:09:40.793 "nvme_admin": false, 00:09:40.794 "nvme_io": false, 00:09:40.794 "nvme_io_md": false, 00:09:40.794 "write_zeroes": true, 00:09:40.794 "zcopy": true, 00:09:40.794 "get_zone_info": false, 00:09:40.794 "zone_management": false, 00:09:40.794 "zone_append": false, 00:09:40.794 "compare": false, 00:09:40.794 "compare_and_write": false, 00:09:40.794 "abort": true, 00:09:40.794 "seek_hole": false, 00:09:40.794 "seek_data": false, 00:09:40.794 "copy": true, 00:09:40.794 "nvme_iov_md": false 00:09:40.794 }, 00:09:40.794 "memory_domains": [ 00:09:40.794 { 00:09:40.794 "dma_device_id": "system", 00:09:40.794 "dma_device_type": 1 00:09:40.794 }, 00:09:40.794 { 00:09:40.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.794 "dma_device_type": 2 00:09:40.794 } 00:09:40.794 ], 00:09:40.794 "driver_specific": {} 00:09:40.794 } 00:09:40.794 ] 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.794 "name": "Existed_Raid", 00:09:40.794 "uuid": "b0677414-8356-4853-8f51-e448ae152749", 00:09:40.794 "strip_size_kb": 64, 00:09:40.794 "state": "configuring", 00:09:40.794 "raid_level": "raid0", 00:09:40.794 "superblock": true, 00:09:40.794 "num_base_bdevs": 4, 00:09:40.794 "num_base_bdevs_discovered": 3, 00:09:40.794 "num_base_bdevs_operational": 4, 00:09:40.794 "base_bdevs_list": [ 00:09:40.794 { 00:09:40.794 "name": "BaseBdev1", 00:09:40.794 "uuid": "bf868686-b130-41e0-8d52-86367d60c775", 00:09:40.794 "is_configured": true, 00:09:40.794 "data_offset": 2048, 00:09:40.794 "data_size": 63488 00:09:40.794 }, 00:09:40.794 { 00:09:40.794 "name": "BaseBdev2", 00:09:40.794 "uuid": 
"65a3d501-bebd-49c9-96fe-5c5920976b77", 00:09:40.794 "is_configured": true, 00:09:40.794 "data_offset": 2048, 00:09:40.794 "data_size": 63488 00:09:40.794 }, 00:09:40.794 { 00:09:40.794 "name": "BaseBdev3", 00:09:40.794 "uuid": "2610c43e-7887-4ffe-a1ae-67944182f2f5", 00:09:40.794 "is_configured": true, 00:09:40.794 "data_offset": 2048, 00:09:40.794 "data_size": 63488 00:09:40.794 }, 00:09:40.794 { 00:09:40.794 "name": "BaseBdev4", 00:09:40.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.794 "is_configured": false, 00:09:40.794 "data_offset": 0, 00:09:40.794 "data_size": 0 00:09:40.794 } 00:09:40.794 ] 00:09:40.794 }' 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.794 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.364 [2024-09-30 23:27:20.955608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:41.364 [2024-09-30 23:27:20.955901] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:41.364 [2024-09-30 23:27:20.955957] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:41.364 BaseBdev4 00:09:41.364 [2024-09-30 23:27:20.956282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.364 [2024-09-30 23:27:20.956431] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:41.364 [2024-09-30 23:27:20.956445] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:09:41.364 [2024-09-30 23:27:20.956563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.364 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.364 [ 00:09:41.364 { 00:09:41.364 "name": "BaseBdev4", 00:09:41.364 "aliases": [ 00:09:41.364 "91e7b861-427e-408d-b437-3720e100b775" 00:09:41.364 ], 00:09:41.364 "product_name": "Malloc disk", 00:09:41.364 "block_size": 512, 00:09:41.364 
"num_blocks": 65536, 00:09:41.364 "uuid": "91e7b861-427e-408d-b437-3720e100b775", 00:09:41.364 "assigned_rate_limits": { 00:09:41.364 "rw_ios_per_sec": 0, 00:09:41.364 "rw_mbytes_per_sec": 0, 00:09:41.364 "r_mbytes_per_sec": 0, 00:09:41.364 "w_mbytes_per_sec": 0 00:09:41.364 }, 00:09:41.364 "claimed": true, 00:09:41.364 "claim_type": "exclusive_write", 00:09:41.364 "zoned": false, 00:09:41.364 "supported_io_types": { 00:09:41.364 "read": true, 00:09:41.364 "write": true, 00:09:41.364 "unmap": true, 00:09:41.364 "flush": true, 00:09:41.364 "reset": true, 00:09:41.364 "nvme_admin": false, 00:09:41.364 "nvme_io": false, 00:09:41.364 "nvme_io_md": false, 00:09:41.364 "write_zeroes": true, 00:09:41.364 "zcopy": true, 00:09:41.364 "get_zone_info": false, 00:09:41.364 "zone_management": false, 00:09:41.364 "zone_append": false, 00:09:41.364 "compare": false, 00:09:41.364 "compare_and_write": false, 00:09:41.364 "abort": true, 00:09:41.364 "seek_hole": false, 00:09:41.364 "seek_data": false, 00:09:41.364 "copy": true, 00:09:41.364 "nvme_iov_md": false 00:09:41.364 }, 00:09:41.364 "memory_domains": [ 00:09:41.364 { 00:09:41.364 "dma_device_id": "system", 00:09:41.364 "dma_device_type": 1 00:09:41.364 }, 00:09:41.364 { 00:09:41.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.364 "dma_device_type": 2 00:09:41.364 } 00:09:41.365 ], 00:09:41.365 "driver_specific": {} 00:09:41.365 } 00:09:41.365 ] 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
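Once the fourth base bdev is claimed, the raid goes online and the log reports `blockcnt 253952, blocklen 512`. That figure follows directly from the numbers already in the dumps: each 65536-block malloc bdev reserves 2048 blocks for the superblock (`data_offset: 2048`, `data_size: 63488`), and raid0 stripes data across all four. A small arithmetic sketch, using only values taken from the log:

```shell
#!/bin/sh
# Where "blockcnt 253952" comes from: raid0 capacity is the per-bdev
# data_size (num_blocks minus the superblock's data_offset) times the
# number of base bdevs. All constants below appear in the log above.
num_base_bdevs=4
base_num_blocks=65536     # bdev_malloc_create 32 512 -> 32 MiB / 512 B
data_offset=2048          # reserved for the superblock (-s)
data_size=$((base_num_blocks - data_offset))
raid0_blocks=$((data_size * num_base_bdevs))
echo "$data_size $raid0_blocks"   # → 63488 253952
```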
00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.365 23:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.365 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.365 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.365 "name": "Existed_Raid", 00:09:41.365 "uuid": "b0677414-8356-4853-8f51-e448ae152749", 00:09:41.365 "strip_size_kb": 64, 00:09:41.365 "state": "online", 00:09:41.365 "raid_level": "raid0", 00:09:41.365 "superblock": true, 00:09:41.365 "num_base_bdevs": 4, 
00:09:41.365 "num_base_bdevs_discovered": 4, 00:09:41.365 "num_base_bdevs_operational": 4, 00:09:41.365 "base_bdevs_list": [ 00:09:41.365 { 00:09:41.365 "name": "BaseBdev1", 00:09:41.365 "uuid": "bf868686-b130-41e0-8d52-86367d60c775", 00:09:41.365 "is_configured": true, 00:09:41.365 "data_offset": 2048, 00:09:41.365 "data_size": 63488 00:09:41.365 }, 00:09:41.365 { 00:09:41.365 "name": "BaseBdev2", 00:09:41.365 "uuid": "65a3d501-bebd-49c9-96fe-5c5920976b77", 00:09:41.365 "is_configured": true, 00:09:41.365 "data_offset": 2048, 00:09:41.365 "data_size": 63488 00:09:41.365 }, 00:09:41.365 { 00:09:41.365 "name": "BaseBdev3", 00:09:41.365 "uuid": "2610c43e-7887-4ffe-a1ae-67944182f2f5", 00:09:41.365 "is_configured": true, 00:09:41.365 "data_offset": 2048, 00:09:41.365 "data_size": 63488 00:09:41.365 }, 00:09:41.365 { 00:09:41.365 "name": "BaseBdev4", 00:09:41.365 "uuid": "91e7b861-427e-408d-b437-3720e100b775", 00:09:41.365 "is_configured": true, 00:09:41.365 "data_offset": 2048, 00:09:41.365 "data_size": 63488 00:09:41.365 } 00:09:41.365 ] 00:09:41.365 }' 00:09:41.365 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.365 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.631 
23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.631 [2024-09-30 23:27:21.435183] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.631 "name": "Existed_Raid", 00:09:41.631 "aliases": [ 00:09:41.631 "b0677414-8356-4853-8f51-e448ae152749" 00:09:41.631 ], 00:09:41.631 "product_name": "Raid Volume", 00:09:41.631 "block_size": 512, 00:09:41.631 "num_blocks": 253952, 00:09:41.631 "uuid": "b0677414-8356-4853-8f51-e448ae152749", 00:09:41.631 "assigned_rate_limits": { 00:09:41.631 "rw_ios_per_sec": 0, 00:09:41.631 "rw_mbytes_per_sec": 0, 00:09:41.631 "r_mbytes_per_sec": 0, 00:09:41.631 "w_mbytes_per_sec": 0 00:09:41.631 }, 00:09:41.631 "claimed": false, 00:09:41.631 "zoned": false, 00:09:41.631 "supported_io_types": { 00:09:41.631 "read": true, 00:09:41.631 "write": true, 00:09:41.631 "unmap": true, 00:09:41.631 "flush": true, 00:09:41.631 "reset": true, 00:09:41.631 "nvme_admin": false, 00:09:41.631 "nvme_io": false, 00:09:41.631 "nvme_io_md": false, 00:09:41.631 "write_zeroes": true, 00:09:41.631 "zcopy": false, 00:09:41.631 "get_zone_info": false, 00:09:41.631 "zone_management": false, 00:09:41.631 "zone_append": false, 00:09:41.631 "compare": false, 00:09:41.631 "compare_and_write": false, 00:09:41.631 "abort": false, 00:09:41.631 "seek_hole": false, 00:09:41.631 "seek_data": false, 00:09:41.631 "copy": false, 00:09:41.631 
"nvme_iov_md": false 00:09:41.631 }, 00:09:41.631 "memory_domains": [ 00:09:41.631 { 00:09:41.631 "dma_device_id": "system", 00:09:41.631 "dma_device_type": 1 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.631 "dma_device_type": 2 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "dma_device_id": "system", 00:09:41.631 "dma_device_type": 1 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.631 "dma_device_type": 2 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "dma_device_id": "system", 00:09:41.631 "dma_device_type": 1 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.631 "dma_device_type": 2 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "dma_device_id": "system", 00:09:41.631 "dma_device_type": 1 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.631 "dma_device_type": 2 00:09:41.631 } 00:09:41.631 ], 00:09:41.631 "driver_specific": { 00:09:41.631 "raid": { 00:09:41.631 "uuid": "b0677414-8356-4853-8f51-e448ae152749", 00:09:41.631 "strip_size_kb": 64, 00:09:41.631 "state": "online", 00:09:41.631 "raid_level": "raid0", 00:09:41.631 "superblock": true, 00:09:41.631 "num_base_bdevs": 4, 00:09:41.631 "num_base_bdevs_discovered": 4, 00:09:41.631 "num_base_bdevs_operational": 4, 00:09:41.631 "base_bdevs_list": [ 00:09:41.631 { 00:09:41.631 "name": "BaseBdev1", 00:09:41.631 "uuid": "bf868686-b130-41e0-8d52-86367d60c775", 00:09:41.631 "is_configured": true, 00:09:41.631 "data_offset": 2048, 00:09:41.631 "data_size": 63488 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "name": "BaseBdev2", 00:09:41.631 "uuid": "65a3d501-bebd-49c9-96fe-5c5920976b77", 00:09:41.631 "is_configured": true, 00:09:41.631 "data_offset": 2048, 00:09:41.631 "data_size": 63488 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "name": "BaseBdev3", 00:09:41.631 "uuid": "2610c43e-7887-4ffe-a1ae-67944182f2f5", 00:09:41.631 "is_configured": true, 
00:09:41.631 "data_offset": 2048, 00:09:41.631 "data_size": 63488 00:09:41.631 }, 00:09:41.631 { 00:09:41.631 "name": "BaseBdev4", 00:09:41.631 "uuid": "91e7b861-427e-408d-b437-3720e100b775", 00:09:41.631 "is_configured": true, 00:09:41.631 "data_offset": 2048, 00:09:41.631 "data_size": 63488 00:09:41.631 } 00:09:41.631 ] 00:09:41.631 } 00:09:41.631 } 00:09:41.631 }' 00:09:41.631 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:41.913 BaseBdev2 00:09:41.913 BaseBdev3 00:09:41.913 BaseBdev4' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.913 23:27:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.913 [2024-09-30 23:27:21.734328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.913 [2024-09-30 23:27:21.734399] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.913 [2024-09-30 23:27:21.734488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.913 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.176 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:42.176 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.176 "name": "Existed_Raid", 00:09:42.176 "uuid": "b0677414-8356-4853-8f51-e448ae152749", 00:09:42.176 "strip_size_kb": 64, 00:09:42.176 "state": "offline", 00:09:42.176 "raid_level": "raid0", 00:09:42.176 "superblock": true, 00:09:42.176 "num_base_bdevs": 4, 00:09:42.176 "num_base_bdevs_discovered": 3, 00:09:42.176 "num_base_bdevs_operational": 3, 00:09:42.176 "base_bdevs_list": [ 00:09:42.176 { 00:09:42.176 "name": null, 00:09:42.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.176 "is_configured": false, 00:09:42.176 "data_offset": 0, 00:09:42.176 "data_size": 63488 00:09:42.176 }, 00:09:42.176 { 00:09:42.176 "name": "BaseBdev2", 00:09:42.176 "uuid": "65a3d501-bebd-49c9-96fe-5c5920976b77", 00:09:42.176 "is_configured": true, 00:09:42.176 "data_offset": 2048, 00:09:42.176 "data_size": 63488 00:09:42.176 }, 00:09:42.176 { 00:09:42.176 "name": "BaseBdev3", 00:09:42.176 "uuid": "2610c43e-7887-4ffe-a1ae-67944182f2f5", 00:09:42.176 "is_configured": true, 00:09:42.176 "data_offset": 2048, 00:09:42.176 "data_size": 63488 00:09:42.176 }, 00:09:42.176 { 00:09:42.176 "name": "BaseBdev4", 00:09:42.176 "uuid": "91e7b861-427e-408d-b437-3720e100b775", 00:09:42.176 "is_configured": true, 00:09:42.176 "data_offset": 2048, 00:09:42.176 "data_size": 63488 00:09:42.176 } 00:09:42.176 ] 00:09:42.176 }' 00:09:42.176 23:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.176 23:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.434 
23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.434 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.435 [2024-09-30 23:27:22.196781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.435 [2024-09-30 23:27:22.263985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.435 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.693 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.693 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.693 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.693 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:42.693 23:27:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.693 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.693 [2024-09-30 23:27:22.319146] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:42.693 [2024-09-30 23:27:22.319188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:42.693 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.693 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.694 BaseBdev2 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.694 [ 00:09:42.694 { 00:09:42.694 "name": "BaseBdev2", 00:09:42.694 "aliases": [ 00:09:42.694 
"9a7b6ed3-38e5-40b4-adcc-e9b272adbccb" 00:09:42.694 ], 00:09:42.694 "product_name": "Malloc disk", 00:09:42.694 "block_size": 512, 00:09:42.694 "num_blocks": 65536, 00:09:42.694 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:42.694 "assigned_rate_limits": { 00:09:42.694 "rw_ios_per_sec": 0, 00:09:42.694 "rw_mbytes_per_sec": 0, 00:09:42.694 "r_mbytes_per_sec": 0, 00:09:42.694 "w_mbytes_per_sec": 0 00:09:42.694 }, 00:09:42.694 "claimed": false, 00:09:42.694 "zoned": false, 00:09:42.694 "supported_io_types": { 00:09:42.694 "read": true, 00:09:42.694 "write": true, 00:09:42.694 "unmap": true, 00:09:42.694 "flush": true, 00:09:42.694 "reset": true, 00:09:42.694 "nvme_admin": false, 00:09:42.694 "nvme_io": false, 00:09:42.694 "nvme_io_md": false, 00:09:42.694 "write_zeroes": true, 00:09:42.694 "zcopy": true, 00:09:42.694 "get_zone_info": false, 00:09:42.694 "zone_management": false, 00:09:42.694 "zone_append": false, 00:09:42.694 "compare": false, 00:09:42.694 "compare_and_write": false, 00:09:42.694 "abort": true, 00:09:42.694 "seek_hole": false, 00:09:42.694 "seek_data": false, 00:09:42.694 "copy": true, 00:09:42.694 "nvme_iov_md": false 00:09:42.694 }, 00:09:42.694 "memory_domains": [ 00:09:42.694 { 00:09:42.694 "dma_device_id": "system", 00:09:42.694 "dma_device_type": 1 00:09:42.694 }, 00:09:42.694 { 00:09:42.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.694 "dma_device_type": 2 00:09:42.694 } 00:09:42.694 ], 00:09:42.694 "driver_specific": {} 00:09:42.694 } 00:09:42.694 ] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.694 23:27:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.694 BaseBdev3 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.694 [ 00:09:42.694 { 
00:09:42.694 "name": "BaseBdev3", 00:09:42.694 "aliases": [ 00:09:42.694 "f822e538-fc5c-447e-970a-b8f95453797a" 00:09:42.694 ], 00:09:42.694 "product_name": "Malloc disk", 00:09:42.694 "block_size": 512, 00:09:42.694 "num_blocks": 65536, 00:09:42.694 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:42.694 "assigned_rate_limits": { 00:09:42.694 "rw_ios_per_sec": 0, 00:09:42.694 "rw_mbytes_per_sec": 0, 00:09:42.694 "r_mbytes_per_sec": 0, 00:09:42.694 "w_mbytes_per_sec": 0 00:09:42.694 }, 00:09:42.694 "claimed": false, 00:09:42.694 "zoned": false, 00:09:42.694 "supported_io_types": { 00:09:42.694 "read": true, 00:09:42.694 "write": true, 00:09:42.694 "unmap": true, 00:09:42.694 "flush": true, 00:09:42.694 "reset": true, 00:09:42.694 "nvme_admin": false, 00:09:42.694 "nvme_io": false, 00:09:42.694 "nvme_io_md": false, 00:09:42.694 "write_zeroes": true, 00:09:42.694 "zcopy": true, 00:09:42.694 "get_zone_info": false, 00:09:42.694 "zone_management": false, 00:09:42.694 "zone_append": false, 00:09:42.694 "compare": false, 00:09:42.694 "compare_and_write": false, 00:09:42.694 "abort": true, 00:09:42.694 "seek_hole": false, 00:09:42.694 "seek_data": false, 00:09:42.694 "copy": true, 00:09:42.694 "nvme_iov_md": false 00:09:42.694 }, 00:09:42.694 "memory_domains": [ 00:09:42.694 { 00:09:42.694 "dma_device_id": "system", 00:09:42.694 "dma_device_type": 1 00:09:42.694 }, 00:09:42.694 { 00:09:42.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.694 "dma_device_type": 2 00:09:42.694 } 00:09:42.694 ], 00:09:42.694 "driver_specific": {} 00:09:42.694 } 00:09:42.694 ] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.694 BaseBdev4 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.694 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.695 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.695 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:42.695 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.695 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:42.695 [ 00:09:42.695 { 00:09:42.695 "name": "BaseBdev4", 00:09:42.695 "aliases": [ 00:09:42.695 "047921c3-b321-417b-9d55-022a582ce910" 00:09:42.695 ], 00:09:42.695 "product_name": "Malloc disk", 00:09:42.695 "block_size": 512, 00:09:42.695 "num_blocks": 65536, 00:09:42.695 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:42.695 "assigned_rate_limits": { 00:09:42.695 "rw_ios_per_sec": 0, 00:09:42.695 "rw_mbytes_per_sec": 0, 00:09:42.695 "r_mbytes_per_sec": 0, 00:09:42.695 "w_mbytes_per_sec": 0 00:09:42.695 }, 00:09:42.695 "claimed": false, 00:09:42.695 "zoned": false, 00:09:42.695 "supported_io_types": { 00:09:42.695 "read": true, 00:09:42.695 "write": true, 00:09:42.695 "unmap": true, 00:09:42.695 "flush": true, 00:09:42.695 "reset": true, 00:09:42.695 "nvme_admin": false, 00:09:42.695 "nvme_io": false, 00:09:42.695 "nvme_io_md": false, 00:09:42.695 "write_zeroes": true, 00:09:42.695 "zcopy": true, 00:09:42.695 "get_zone_info": false, 00:09:42.695 "zone_management": false, 00:09:42.695 "zone_append": false, 00:09:42.695 "compare": false, 00:09:42.695 "compare_and_write": false, 00:09:42.695 "abort": true, 00:09:42.695 "seek_hole": false, 00:09:42.695 "seek_data": false, 00:09:42.695 "copy": true, 00:09:42.695 "nvme_iov_md": false 00:09:42.695 }, 00:09:42.695 "memory_domains": [ 00:09:42.695 { 00:09:42.695 "dma_device_id": "system", 00:09:42.695 "dma_device_type": 1 00:09:42.695 }, 00:09:42.695 { 00:09:42.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.954 "dma_device_type": 2 00:09:42.954 } 00:09:42.954 ], 00:09:42.954 "driver_specific": {} 00:09:42.954 } 00:09:42.954 ] 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:42.954 23:27:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.954 [2024-09-30 23:27:22.555196] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.954 [2024-09-30 23:27:22.555283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.954 [2024-09-30 23:27:22.555323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.954 [2024-09-30 23:27:22.557171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.954 [2024-09-30 23:27:22.557271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.954 "name": "Existed_Raid", 00:09:42.954 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:42.954 "strip_size_kb": 64, 00:09:42.954 "state": "configuring", 00:09:42.954 "raid_level": "raid0", 00:09:42.954 "superblock": true, 00:09:42.954 "num_base_bdevs": 4, 00:09:42.954 "num_base_bdevs_discovered": 3, 00:09:42.954 "num_base_bdevs_operational": 4, 00:09:42.954 "base_bdevs_list": [ 00:09:42.954 { 00:09:42.954 "name": "BaseBdev1", 00:09:42.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.954 "is_configured": false, 00:09:42.954 "data_offset": 0, 00:09:42.954 "data_size": 0 00:09:42.954 }, 00:09:42.954 { 00:09:42.954 "name": "BaseBdev2", 00:09:42.954 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:42.954 "is_configured": true, 00:09:42.954 "data_offset": 2048, 00:09:42.954 "data_size": 63488 
00:09:42.954 }, 00:09:42.954 { 00:09:42.954 "name": "BaseBdev3", 00:09:42.954 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:42.954 "is_configured": true, 00:09:42.954 "data_offset": 2048, 00:09:42.954 "data_size": 63488 00:09:42.954 }, 00:09:42.954 { 00:09:42.954 "name": "BaseBdev4", 00:09:42.954 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:42.954 "is_configured": true, 00:09:42.954 "data_offset": 2048, 00:09:42.954 "data_size": 63488 00:09:42.954 } 00:09:42.954 ] 00:09:42.954 }' 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.954 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 [2024-09-30 23:27:22.990471] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.214 23:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.214 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.214 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.214 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.214 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.214 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.214 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.214 "name": "Existed_Raid", 00:09:43.214 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:43.214 "strip_size_kb": 64, 00:09:43.214 "state": "configuring", 00:09:43.214 "raid_level": "raid0", 00:09:43.214 "superblock": true, 00:09:43.214 "num_base_bdevs": 4, 00:09:43.214 "num_base_bdevs_discovered": 2, 00:09:43.214 "num_base_bdevs_operational": 4, 00:09:43.214 "base_bdevs_list": [ 00:09:43.214 { 00:09:43.214 "name": "BaseBdev1", 00:09:43.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.214 "is_configured": false, 00:09:43.214 "data_offset": 0, 00:09:43.214 "data_size": 0 00:09:43.214 }, 00:09:43.214 { 00:09:43.214 "name": null, 00:09:43.214 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:43.214 "is_configured": false, 00:09:43.214 "data_offset": 0, 00:09:43.214 "data_size": 63488 
00:09:43.214 }, 00:09:43.214 { 00:09:43.214 "name": "BaseBdev3", 00:09:43.214 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:43.214 "is_configured": true, 00:09:43.214 "data_offset": 2048, 00:09:43.214 "data_size": 63488 00:09:43.214 }, 00:09:43.214 { 00:09:43.214 "name": "BaseBdev4", 00:09:43.214 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:43.214 "is_configured": true, 00:09:43.214 "data_offset": 2048, 00:09:43.214 "data_size": 63488 00:09:43.214 } 00:09:43.214 ] 00:09:43.214 }' 00:09:43.214 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.214 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.782 [2024-09-30 23:27:23.452670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.782 BaseBdev1 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.782 [ 00:09:43.782 { 00:09:43.782 "name": "BaseBdev1", 00:09:43.782 "aliases": [ 00:09:43.782 "8272cdc1-7d48-4f0b-8f02-2f95b28eb493" 00:09:43.782 ], 00:09:43.782 "product_name": "Malloc disk", 00:09:43.782 "block_size": 512, 00:09:43.782 "num_blocks": 65536, 00:09:43.782 "uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:43.782 "assigned_rate_limits": { 00:09:43.782 "rw_ios_per_sec": 0, 00:09:43.782 "rw_mbytes_per_sec": 0, 
00:09:43.782 "r_mbytes_per_sec": 0, 00:09:43.782 "w_mbytes_per_sec": 0 00:09:43.782 }, 00:09:43.782 "claimed": true, 00:09:43.782 "claim_type": "exclusive_write", 00:09:43.782 "zoned": false, 00:09:43.782 "supported_io_types": { 00:09:43.782 "read": true, 00:09:43.782 "write": true, 00:09:43.782 "unmap": true, 00:09:43.782 "flush": true, 00:09:43.782 "reset": true, 00:09:43.782 "nvme_admin": false, 00:09:43.782 "nvme_io": false, 00:09:43.782 "nvme_io_md": false, 00:09:43.782 "write_zeroes": true, 00:09:43.782 "zcopy": true, 00:09:43.782 "get_zone_info": false, 00:09:43.782 "zone_management": false, 00:09:43.782 "zone_append": false, 00:09:43.782 "compare": false, 00:09:43.782 "compare_and_write": false, 00:09:43.782 "abort": true, 00:09:43.782 "seek_hole": false, 00:09:43.782 "seek_data": false, 00:09:43.782 "copy": true, 00:09:43.782 "nvme_iov_md": false 00:09:43.782 }, 00:09:43.782 "memory_domains": [ 00:09:43.782 { 00:09:43.782 "dma_device_id": "system", 00:09:43.782 "dma_device_type": 1 00:09:43.782 }, 00:09:43.782 { 00:09:43.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.782 "dma_device_type": 2 00:09:43.782 } 00:09:43.782 ], 00:09:43.782 "driver_specific": {} 00:09:43.782 } 00:09:43.782 ] 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.782 23:27:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.782 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.783 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.783 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.783 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.783 "name": "Existed_Raid", 00:09:43.783 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:43.783 "strip_size_kb": 64, 00:09:43.783 "state": "configuring", 00:09:43.783 "raid_level": "raid0", 00:09:43.783 "superblock": true, 00:09:43.783 "num_base_bdevs": 4, 00:09:43.783 "num_base_bdevs_discovered": 3, 00:09:43.783 "num_base_bdevs_operational": 4, 00:09:43.783 "base_bdevs_list": [ 00:09:43.783 { 00:09:43.783 "name": "BaseBdev1", 00:09:43.783 "uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:43.783 "is_configured": true, 00:09:43.783 "data_offset": 2048, 00:09:43.783 "data_size": 63488 00:09:43.783 }, 00:09:43.783 { 
00:09:43.783 "name": null, 00:09:43.783 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:43.783 "is_configured": false, 00:09:43.783 "data_offset": 0, 00:09:43.783 "data_size": 63488 00:09:43.783 }, 00:09:43.783 { 00:09:43.783 "name": "BaseBdev3", 00:09:43.783 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:43.783 "is_configured": true, 00:09:43.783 "data_offset": 2048, 00:09:43.783 "data_size": 63488 00:09:43.783 }, 00:09:43.783 { 00:09:43.783 "name": "BaseBdev4", 00:09:43.783 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:43.783 "is_configured": true, 00:09:43.783 "data_offset": 2048, 00:09:43.783 "data_size": 63488 00:09:43.783 } 00:09:43.783 ] 00:09:43.783 }' 00:09:43.783 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.783 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.352 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.352 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.352 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.352 23:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.352 23:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.352 [2024-09-30 23:27:24.011794] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.352 23:27:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.352 "name": "Existed_Raid", 00:09:44.352 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:44.352 "strip_size_kb": 64, 00:09:44.352 "state": "configuring", 00:09:44.352 "raid_level": "raid0", 00:09:44.352 "superblock": true, 00:09:44.352 "num_base_bdevs": 4, 00:09:44.352 "num_base_bdevs_discovered": 2, 00:09:44.352 "num_base_bdevs_operational": 4, 00:09:44.352 "base_bdevs_list": [ 00:09:44.352 { 00:09:44.352 "name": "BaseBdev1", 00:09:44.352 "uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:44.352 "is_configured": true, 00:09:44.352 "data_offset": 2048, 00:09:44.352 "data_size": 63488 00:09:44.352 }, 00:09:44.352 { 00:09:44.352 "name": null, 00:09:44.352 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:44.352 "is_configured": false, 00:09:44.352 "data_offset": 0, 00:09:44.352 "data_size": 63488 00:09:44.352 }, 00:09:44.352 { 00:09:44.352 "name": null, 00:09:44.352 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:44.352 "is_configured": false, 00:09:44.352 "data_offset": 0, 00:09:44.352 "data_size": 63488 00:09:44.352 }, 00:09:44.352 { 00:09:44.352 "name": "BaseBdev4", 00:09:44.352 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:44.352 "is_configured": true, 00:09:44.352 "data_offset": 2048, 00:09:44.352 "data_size": 63488 00:09:44.352 } 00:09:44.352 ] 00:09:44.352 }' 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.352 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.612 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:44.612 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.612 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.612 
23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.872 [2024-09-30 23:27:24.499048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.872 "name": "Existed_Raid", 00:09:44.872 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:44.872 "strip_size_kb": 64, 00:09:44.872 "state": "configuring", 00:09:44.872 "raid_level": "raid0", 00:09:44.872 "superblock": true, 00:09:44.872 "num_base_bdevs": 4, 00:09:44.872 "num_base_bdevs_discovered": 3, 00:09:44.872 "num_base_bdevs_operational": 4, 00:09:44.872 "base_bdevs_list": [ 00:09:44.872 { 00:09:44.872 "name": "BaseBdev1", 00:09:44.872 "uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:44.872 "is_configured": true, 00:09:44.872 "data_offset": 2048, 00:09:44.872 "data_size": 63488 00:09:44.872 }, 00:09:44.872 { 00:09:44.872 "name": null, 00:09:44.872 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:44.872 "is_configured": false, 00:09:44.872 "data_offset": 0, 00:09:44.872 "data_size": 63488 00:09:44.872 }, 00:09:44.872 { 00:09:44.872 "name": "BaseBdev3", 00:09:44.872 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:44.872 "is_configured": true, 00:09:44.872 "data_offset": 2048, 00:09:44.872 "data_size": 63488 00:09:44.872 }, 00:09:44.872 { 00:09:44.872 "name": "BaseBdev4", 00:09:44.872 "uuid": 
"047921c3-b321-417b-9d55-022a582ce910", 00:09:44.872 "is_configured": true, 00:09:44.872 "data_offset": 2048, 00:09:44.872 "data_size": 63488 00:09:44.872 } 00:09:44.872 ] 00:09:44.872 }' 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.872 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.131 [2024-09-30 23:27:24.918352] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.131 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.391 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.391 "name": "Existed_Raid", 00:09:45.391 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:45.391 "strip_size_kb": 64, 00:09:45.391 "state": "configuring", 00:09:45.391 "raid_level": "raid0", 00:09:45.391 "superblock": true, 00:09:45.391 "num_base_bdevs": 4, 00:09:45.391 "num_base_bdevs_discovered": 2, 00:09:45.391 "num_base_bdevs_operational": 4, 00:09:45.391 "base_bdevs_list": [ 00:09:45.391 { 00:09:45.391 "name": null, 00:09:45.391 
"uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:45.391 "is_configured": false, 00:09:45.391 "data_offset": 0, 00:09:45.391 "data_size": 63488 00:09:45.391 }, 00:09:45.391 { 00:09:45.391 "name": null, 00:09:45.391 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:45.391 "is_configured": false, 00:09:45.391 "data_offset": 0, 00:09:45.391 "data_size": 63488 00:09:45.391 }, 00:09:45.391 { 00:09:45.391 "name": "BaseBdev3", 00:09:45.391 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:45.391 "is_configured": true, 00:09:45.391 "data_offset": 2048, 00:09:45.391 "data_size": 63488 00:09:45.391 }, 00:09:45.391 { 00:09:45.391 "name": "BaseBdev4", 00:09:45.391 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:45.391 "is_configured": true, 00:09:45.391 "data_offset": 2048, 00:09:45.391 "data_size": 63488 00:09:45.391 } 00:09:45.391 ] 00:09:45.391 }' 00:09:45.391 23:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.391 23:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.650 [2024-09-30 23:27:25.439846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.650 23:27:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.650 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.650 "name": "Existed_Raid", 00:09:45.650 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:45.650 "strip_size_kb": 64, 00:09:45.650 "state": "configuring", 00:09:45.650 "raid_level": "raid0", 00:09:45.650 "superblock": true, 00:09:45.650 "num_base_bdevs": 4, 00:09:45.651 "num_base_bdevs_discovered": 3, 00:09:45.651 "num_base_bdevs_operational": 4, 00:09:45.651 "base_bdevs_list": [ 00:09:45.651 { 00:09:45.651 "name": null, 00:09:45.651 "uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:45.651 "is_configured": false, 00:09:45.651 "data_offset": 0, 00:09:45.651 "data_size": 63488 00:09:45.651 }, 00:09:45.651 { 00:09:45.651 "name": "BaseBdev2", 00:09:45.651 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:45.651 "is_configured": true, 00:09:45.651 "data_offset": 2048, 00:09:45.651 "data_size": 63488 00:09:45.651 }, 00:09:45.651 { 00:09:45.651 "name": "BaseBdev3", 00:09:45.651 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:45.651 "is_configured": true, 00:09:45.651 "data_offset": 2048, 00:09:45.651 "data_size": 63488 00:09:45.651 }, 00:09:45.651 { 00:09:45.651 "name": "BaseBdev4", 00:09:45.651 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:45.651 "is_configured": true, 00:09:45.651 "data_offset": 2048, 00:09:45.651 "data_size": 63488 00:09:45.651 } 00:09:45.651 ] 00:09:45.651 }' 00:09:45.651 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.651 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:46.219 23:27:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8272cdc1-7d48-4f0b-8f02-2f95b28eb493 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 [2024-09-30 23:27:25.929905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:46.219 [2024-09-30 23:27:25.930153] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:46.219 [2024-09-30 23:27:25.930215] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:46.219 [2024-09-30 23:27:25.930492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:09:46.219 NewBaseBdev 00:09:46.219 [2024-09-30 23:27:25.930655] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:46.219 [2024-09-30 23:27:25.930700] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:46.219 [2024-09-30 23:27:25.930834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.219 23:27:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 [ 00:09:46.219 { 00:09:46.219 "name": "NewBaseBdev", 00:09:46.219 "aliases": [ 00:09:46.219 "8272cdc1-7d48-4f0b-8f02-2f95b28eb493" 00:09:46.219 ], 00:09:46.219 "product_name": "Malloc disk", 00:09:46.219 "block_size": 512, 00:09:46.219 "num_blocks": 65536, 00:09:46.219 "uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:46.219 "assigned_rate_limits": { 00:09:46.219 "rw_ios_per_sec": 0, 00:09:46.219 "rw_mbytes_per_sec": 0, 00:09:46.219 "r_mbytes_per_sec": 0, 00:09:46.219 "w_mbytes_per_sec": 0 00:09:46.219 }, 00:09:46.219 "claimed": true, 00:09:46.219 "claim_type": "exclusive_write", 00:09:46.219 "zoned": false, 00:09:46.219 "supported_io_types": { 00:09:46.219 "read": true, 00:09:46.219 "write": true, 00:09:46.219 "unmap": true, 00:09:46.219 "flush": true, 00:09:46.219 "reset": true, 00:09:46.219 "nvme_admin": false, 00:09:46.219 "nvme_io": false, 00:09:46.219 "nvme_io_md": false, 00:09:46.219 "write_zeroes": true, 00:09:46.219 "zcopy": true, 00:09:46.219 "get_zone_info": false, 00:09:46.219 "zone_management": false, 00:09:46.219 "zone_append": false, 00:09:46.219 "compare": false, 00:09:46.219 "compare_and_write": false, 00:09:46.219 "abort": true, 00:09:46.219 "seek_hole": false, 00:09:46.219 "seek_data": false, 00:09:46.219 "copy": true, 00:09:46.219 "nvme_iov_md": false 00:09:46.219 }, 00:09:46.219 "memory_domains": [ 00:09:46.219 { 00:09:46.219 "dma_device_id": "system", 00:09:46.219 "dma_device_type": 1 00:09:46.219 }, 00:09:46.219 { 00:09:46.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.219 "dma_device_type": 2 00:09:46.219 } 00:09:46.219 ], 00:09:46.219 "driver_specific": {} 00:09:46.219 } 00:09:46.219 ] 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:46.219 23:27:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 23:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.219 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.219 "name": "Existed_Raid", 00:09:46.219 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:46.219 "strip_size_kb": 64, 00:09:46.219 
"state": "online", 00:09:46.219 "raid_level": "raid0", 00:09:46.219 "superblock": true, 00:09:46.219 "num_base_bdevs": 4, 00:09:46.219 "num_base_bdevs_discovered": 4, 00:09:46.219 "num_base_bdevs_operational": 4, 00:09:46.219 "base_bdevs_list": [ 00:09:46.219 { 00:09:46.219 "name": "NewBaseBdev", 00:09:46.219 "uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:46.219 "is_configured": true, 00:09:46.219 "data_offset": 2048, 00:09:46.219 "data_size": 63488 00:09:46.219 }, 00:09:46.219 { 00:09:46.219 "name": "BaseBdev2", 00:09:46.219 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:46.219 "is_configured": true, 00:09:46.219 "data_offset": 2048, 00:09:46.219 "data_size": 63488 00:09:46.219 }, 00:09:46.219 { 00:09:46.219 "name": "BaseBdev3", 00:09:46.219 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:46.219 "is_configured": true, 00:09:46.219 "data_offset": 2048, 00:09:46.219 "data_size": 63488 00:09:46.219 }, 00:09:46.219 { 00:09:46.219 "name": "BaseBdev4", 00:09:46.219 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:46.219 "is_configured": true, 00:09:46.219 "data_offset": 2048, 00:09:46.219 "data_size": 63488 00:09:46.219 } 00:09:46.219 ] 00:09:46.219 }' 00:09:46.219 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.219 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.479 
23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.479 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.479 [2024-09-30 23:27:26.321541] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.739 "name": "Existed_Raid", 00:09:46.739 "aliases": [ 00:09:46.739 "f648d330-b132-4496-a041-b8be1d4f5a89" 00:09:46.739 ], 00:09:46.739 "product_name": "Raid Volume", 00:09:46.739 "block_size": 512, 00:09:46.739 "num_blocks": 253952, 00:09:46.739 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:46.739 "assigned_rate_limits": { 00:09:46.739 "rw_ios_per_sec": 0, 00:09:46.739 "rw_mbytes_per_sec": 0, 00:09:46.739 "r_mbytes_per_sec": 0, 00:09:46.739 "w_mbytes_per_sec": 0 00:09:46.739 }, 00:09:46.739 "claimed": false, 00:09:46.739 "zoned": false, 00:09:46.739 "supported_io_types": { 00:09:46.739 "read": true, 00:09:46.739 "write": true, 00:09:46.739 "unmap": true, 00:09:46.739 "flush": true, 00:09:46.739 "reset": true, 00:09:46.739 "nvme_admin": false, 00:09:46.739 "nvme_io": false, 00:09:46.739 "nvme_io_md": false, 00:09:46.739 "write_zeroes": true, 00:09:46.739 "zcopy": false, 00:09:46.739 "get_zone_info": false, 00:09:46.739 "zone_management": false, 00:09:46.739 "zone_append": false, 00:09:46.739 "compare": false, 00:09:46.739 "compare_and_write": false, 00:09:46.739 "abort": 
false, 00:09:46.739 "seek_hole": false, 00:09:46.739 "seek_data": false, 00:09:46.739 "copy": false, 00:09:46.739 "nvme_iov_md": false 00:09:46.739 }, 00:09:46.739 "memory_domains": [ 00:09:46.739 { 00:09:46.739 "dma_device_id": "system", 00:09:46.739 "dma_device_type": 1 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.739 "dma_device_type": 2 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "dma_device_id": "system", 00:09:46.739 "dma_device_type": 1 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.739 "dma_device_type": 2 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "dma_device_id": "system", 00:09:46.739 "dma_device_type": 1 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.739 "dma_device_type": 2 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "dma_device_id": "system", 00:09:46.739 "dma_device_type": 1 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.739 "dma_device_type": 2 00:09:46.739 } 00:09:46.739 ], 00:09:46.739 "driver_specific": { 00:09:46.739 "raid": { 00:09:46.739 "uuid": "f648d330-b132-4496-a041-b8be1d4f5a89", 00:09:46.739 "strip_size_kb": 64, 00:09:46.739 "state": "online", 00:09:46.739 "raid_level": "raid0", 00:09:46.739 "superblock": true, 00:09:46.739 "num_base_bdevs": 4, 00:09:46.739 "num_base_bdevs_discovered": 4, 00:09:46.739 "num_base_bdevs_operational": 4, 00:09:46.739 "base_bdevs_list": [ 00:09:46.739 { 00:09:46.739 "name": "NewBaseBdev", 00:09:46.739 "uuid": "8272cdc1-7d48-4f0b-8f02-2f95b28eb493", 00:09:46.739 "is_configured": true, 00:09:46.739 "data_offset": 2048, 00:09:46.739 "data_size": 63488 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "name": "BaseBdev2", 00:09:46.739 "uuid": "9a7b6ed3-38e5-40b4-adcc-e9b272adbccb", 00:09:46.739 "is_configured": true, 00:09:46.739 "data_offset": 2048, 00:09:46.739 "data_size": 63488 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 
"name": "BaseBdev3", 00:09:46.739 "uuid": "f822e538-fc5c-447e-970a-b8f95453797a", 00:09:46.739 "is_configured": true, 00:09:46.739 "data_offset": 2048, 00:09:46.739 "data_size": 63488 00:09:46.739 }, 00:09:46.739 { 00:09:46.739 "name": "BaseBdev4", 00:09:46.739 "uuid": "047921c3-b321-417b-9d55-022a582ce910", 00:09:46.739 "is_configured": true, 00:09:46.739 "data_offset": 2048, 00:09:46.739 "data_size": 63488 00:09:46.739 } 00:09:46.739 ] 00:09:46.739 } 00:09:46.739 } 00:09:46.739 }' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:46.739 BaseBdev2 00:09:46.739 BaseBdev3 00:09:46.739 BaseBdev4' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.739 23:27:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.739 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.999 [2024-09-30 23:27:26.632731] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.999 [2024-09-30 23:27:26.632802] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.999 [2024-09-30 23:27:26.632884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.999 [2024-09-30 23:27:26.632977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.999 [2024-09-30 23:27:26.632987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81056 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81056 ']' 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81056 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81056 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.999 killing process with pid 81056 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81056' 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81056 00:09:46.999 [2024-09-30 23:27:26.682096] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.999 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81056 00:09:46.999 [2024-09-30 23:27:26.723651] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.258 ************************************ 00:09:47.258 END TEST raid_state_function_test_sb 00:09:47.258 ************************************ 00:09:47.258 23:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:47.258 00:09:47.258 real 0m9.382s 00:09:47.258 user 0m16.030s 00:09:47.258 sys 
0m1.949s 00:09:47.258 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.258 23:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.258 23:27:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:47.258 23:27:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:47.258 23:27:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.258 23:27:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.258 ************************************ 00:09:47.258 START TEST raid_superblock_test 00:09:47.258 ************************************ 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81704 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81704 00:09:47.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81704 ']' 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.258 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.517 [2024-09-30 23:27:27.116750] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:47.517 [2024-09-30 23:27:27.116984] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81704 ] 00:09:47.517 [2024-09-30 23:27:27.265197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.517 [2024-09-30 23:27:27.309467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.517 [2024-09-30 23:27:27.351155] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.518 [2024-09-30 23:27:27.351283] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:48.086 
23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.086 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.345 malloc1 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.345 [2024-09-30 23:27:27.957203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.345 [2024-09-30 23:27:27.957334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.345 [2024-09-30 23:27:27.957377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:48.345 [2024-09-30 23:27:27.957413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.345 [2024-09-30 23:27:27.959504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.345 [2024-09-30 23:27:27.959583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.345 pt1 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.345 malloc2 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.345 23:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.345 [2024-09-30 23:27:27.997707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.345 [2024-09-30 23:27:27.997802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.345 [2024-09-30 23:27:27.997838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:48.345 [2024-09-30 23:27:27.997886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.345 [2024-09-30 23:27:28.000112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.345 [2024-09-30 23:27:28.000192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.345 
pt2 00:09:48.345 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.346 malloc3 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.346 [2024-09-30 23:27:28.026265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:48.346 [2024-09-30 23:27:28.026354] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.346 [2024-09-30 23:27:28.026388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:48.346 [2024-09-30 23:27:28.026417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.346 [2024-09-30 23:27:28.028438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.346 [2024-09-30 23:27:28.028511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:48.346 pt3 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.346 malloc4 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.346 [2024-09-30 23:27:28.058708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:48.346 [2024-09-30 23:27:28.058813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.346 [2024-09-30 23:27:28.058845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:48.346 [2024-09-30 23:27:28.058890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.346 [2024-09-30 23:27:28.060934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.346 [2024-09-30 23:27:28.060972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:48.346 pt4 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.346 [2024-09-30 23:27:28.070760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.346 [2024-09-30 
23:27:28.072547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.346 [2024-09-30 23:27:28.072604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:48.346 [2024-09-30 23:27:28.072674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:48.346 [2024-09-30 23:27:28.072821] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:48.346 [2024-09-30 23:27:28.072834] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:48.346 [2024-09-30 23:27:28.073097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:48.346 [2024-09-30 23:27:28.073251] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:48.346 [2024-09-30 23:27:28.073262] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:48.346 [2024-09-30 23:27:28.073381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.346 "name": "raid_bdev1", 00:09:48.346 "uuid": "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87", 00:09:48.346 "strip_size_kb": 64, 00:09:48.346 "state": "online", 00:09:48.346 "raid_level": "raid0", 00:09:48.346 "superblock": true, 00:09:48.346 "num_base_bdevs": 4, 00:09:48.346 "num_base_bdevs_discovered": 4, 00:09:48.346 "num_base_bdevs_operational": 4, 00:09:48.346 "base_bdevs_list": [ 00:09:48.346 { 00:09:48.346 "name": "pt1", 00:09:48.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.346 "is_configured": true, 00:09:48.346 "data_offset": 2048, 00:09:48.346 "data_size": 63488 00:09:48.346 }, 00:09:48.346 { 00:09:48.346 "name": "pt2", 00:09:48.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.346 "is_configured": true, 00:09:48.346 "data_offset": 2048, 00:09:48.346 "data_size": 63488 00:09:48.346 }, 00:09:48.346 { 00:09:48.346 "name": "pt3", 00:09:48.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.346 "is_configured": true, 00:09:48.346 "data_offset": 2048, 00:09:48.346 
"data_size": 63488 00:09:48.346 }, 00:09:48.346 { 00:09:48.346 "name": "pt4", 00:09:48.346 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.346 "is_configured": true, 00:09:48.346 "data_offset": 2048, 00:09:48.346 "data_size": 63488 00:09:48.346 } 00:09:48.346 ] 00:09:48.346 }' 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.346 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.914 [2024-09-30 23:27:28.526223] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.914 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.914 "name": "raid_bdev1", 00:09:48.914 "aliases": [ 00:09:48.914 "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87" 
00:09:48.914 ], 00:09:48.914 "product_name": "Raid Volume", 00:09:48.914 "block_size": 512, 00:09:48.914 "num_blocks": 253952, 00:09:48.914 "uuid": "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87", 00:09:48.914 "assigned_rate_limits": { 00:09:48.914 "rw_ios_per_sec": 0, 00:09:48.914 "rw_mbytes_per_sec": 0, 00:09:48.914 "r_mbytes_per_sec": 0, 00:09:48.914 "w_mbytes_per_sec": 0 00:09:48.914 }, 00:09:48.914 "claimed": false, 00:09:48.914 "zoned": false, 00:09:48.914 "supported_io_types": { 00:09:48.914 "read": true, 00:09:48.914 "write": true, 00:09:48.914 "unmap": true, 00:09:48.914 "flush": true, 00:09:48.914 "reset": true, 00:09:48.914 "nvme_admin": false, 00:09:48.914 "nvme_io": false, 00:09:48.914 "nvme_io_md": false, 00:09:48.914 "write_zeroes": true, 00:09:48.914 "zcopy": false, 00:09:48.914 "get_zone_info": false, 00:09:48.914 "zone_management": false, 00:09:48.914 "zone_append": false, 00:09:48.914 "compare": false, 00:09:48.914 "compare_and_write": false, 00:09:48.914 "abort": false, 00:09:48.914 "seek_hole": false, 00:09:48.914 "seek_data": false, 00:09:48.914 "copy": false, 00:09:48.914 "nvme_iov_md": false 00:09:48.914 }, 00:09:48.914 "memory_domains": [ 00:09:48.914 { 00:09:48.914 "dma_device_id": "system", 00:09:48.914 "dma_device_type": 1 00:09:48.914 }, 00:09:48.914 { 00:09:48.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.915 "dma_device_type": 2 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "dma_device_id": "system", 00:09:48.915 "dma_device_type": 1 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.915 "dma_device_type": 2 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "dma_device_id": "system", 00:09:48.915 "dma_device_type": 1 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.915 "dma_device_type": 2 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "dma_device_id": "system", 00:09:48.915 "dma_device_type": 1 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:48.915 "dma_device_type": 2 00:09:48.915 } 00:09:48.915 ], 00:09:48.915 "driver_specific": { 00:09:48.915 "raid": { 00:09:48.915 "uuid": "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87", 00:09:48.915 "strip_size_kb": 64, 00:09:48.915 "state": "online", 00:09:48.915 "raid_level": "raid0", 00:09:48.915 "superblock": true, 00:09:48.915 "num_base_bdevs": 4, 00:09:48.915 "num_base_bdevs_discovered": 4, 00:09:48.915 "num_base_bdevs_operational": 4, 00:09:48.915 "base_bdevs_list": [ 00:09:48.915 { 00:09:48.915 "name": "pt1", 00:09:48.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.915 "is_configured": true, 00:09:48.915 "data_offset": 2048, 00:09:48.915 "data_size": 63488 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "name": "pt2", 00:09:48.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.915 "is_configured": true, 00:09:48.915 "data_offset": 2048, 00:09:48.915 "data_size": 63488 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "name": "pt3", 00:09:48.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.915 "is_configured": true, 00:09:48.915 "data_offset": 2048, 00:09:48.915 "data_size": 63488 00:09:48.915 }, 00:09:48.915 { 00:09:48.915 "name": "pt4", 00:09:48.915 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:48.915 "is_configured": true, 00:09:48.915 "data_offset": 2048, 00:09:48.915 "data_size": 63488 00:09:48.915 } 00:09:48.915 ] 00:09:48.915 } 00:09:48.915 } 00:09:48.915 }' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:48.915 pt2 00:09:48.915 pt3 00:09:48.915 pt4' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.915 23:27:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.915 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 [2024-09-30 23:27:28.837662] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87 ']' 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 [2024-09-30 23:27:28.881284] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.174 [2024-09-30 23:27:28.881358] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.174 [2024-09-30 23:27:28.881429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.174 [2024-09-30 23:27:28.881502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.174 [2024-09-30 23:27:28.881513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.174 23:27:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.434 23:27:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.434 [2024-09-30 23:27:29.049034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:49.434 [2024-09-30 23:27:29.050913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:49.434 [2024-09-30 23:27:29.050967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:49.434 [2024-09-30 23:27:29.051012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:49.434 [2024-09-30 23:27:29.051059] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:49.434 [2024-09-30 23:27:29.051099] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:49.434 [2024-09-30 23:27:29.051117] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:49.434 [2024-09-30 23:27:29.051132] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:49.434 [2024-09-30 23:27:29.051147] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.434 [2024-09-30 23:27:29.051156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:09:49.434 request: 00:09:49.434 { 00:09:49.434 "name": "raid_bdev1", 00:09:49.434 "raid_level": "raid0", 00:09:49.434 "base_bdevs": [ 00:09:49.434 "malloc1", 00:09:49.434 "malloc2", 00:09:49.434 "malloc3", 00:09:49.434 "malloc4" 00:09:49.434 ], 00:09:49.434 "strip_size_kb": 64, 00:09:49.434 "superblock": false, 00:09:49.434 "method": "bdev_raid_create", 00:09:49.434 "req_id": 1 00:09:49.434 } 00:09:49.434 Got JSON-RPC error response 00:09:49.434 response: 00:09:49.434 { 00:09:49.434 "code": -17, 00:09:49.434 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:49.434 } 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.434 [2024-09-30 23:27:29.116925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:49.434 [2024-09-30 23:27:29.117023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.434 [2024-09-30 23:27:29.117068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:49.434 [2024-09-30 23:27:29.117123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.434 [2024-09-30 23:27:29.119653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.434 [2024-09-30 23:27:29.119749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:49.434 [2024-09-30 23:27:29.119868] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:49.434 [2024-09-30 23:27:29.119966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:49.434 pt1 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.434 "name": "raid_bdev1", 00:09:49.434 "uuid": "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87", 00:09:49.434 "strip_size_kb": 64, 00:09:49.434 "state": "configuring", 00:09:49.434 "raid_level": "raid0", 00:09:49.434 "superblock": true, 00:09:49.434 "num_base_bdevs": 4, 00:09:49.434 "num_base_bdevs_discovered": 1, 00:09:49.434 "num_base_bdevs_operational": 4, 00:09:49.434 "base_bdevs_list": [ 00:09:49.434 { 00:09:49.434 "name": "pt1", 00:09:49.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.434 "is_configured": true, 00:09:49.434 "data_offset": 2048, 00:09:49.434 "data_size": 63488 00:09:49.434 }, 00:09:49.434 { 00:09:49.434 "name": null, 00:09:49.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.434 "is_configured": false, 00:09:49.434 "data_offset": 2048, 00:09:49.434 "data_size": 63488 00:09:49.434 }, 00:09:49.434 { 00:09:49.434 "name": null, 00:09:49.434 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.434 "is_configured": false, 00:09:49.434 "data_offset": 2048, 00:09:49.434 "data_size": 63488 00:09:49.434 }, 00:09:49.434 { 00:09:49.434 "name": null, 00:09:49.434 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:49.434 "is_configured": false, 00:09:49.434 "data_offset": 2048, 00:09:49.434 "data_size": 63488 00:09:49.434 } 00:09:49.434 ] 00:09:49.434 }' 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.434 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.003 [2024-09-30 23:27:29.564164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.003 [2024-09-30 23:27:29.564241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.003 [2024-09-30 23:27:29.564262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:09:50.003 [2024-09-30 23:27:29.564271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.003 [2024-09-30 23:27:29.564690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.003 [2024-09-30 23:27:29.564714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.003 [2024-09-30 23:27:29.564791] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.003 [2024-09-30 23:27:29.564821] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.003 pt2 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.003 [2024-09-30 23:27:29.572141] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.003 23:27:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.003 "name": "raid_bdev1", 00:09:50.003 "uuid": "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87", 00:09:50.003 "strip_size_kb": 64, 00:09:50.003 "state": "configuring", 00:09:50.003 "raid_level": "raid0", 00:09:50.003 "superblock": true, 00:09:50.003 "num_base_bdevs": 4, 00:09:50.003 "num_base_bdevs_discovered": 1, 00:09:50.003 "num_base_bdevs_operational": 4, 00:09:50.003 "base_bdevs_list": [ 00:09:50.003 { 00:09:50.003 "name": "pt1", 00:09:50.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.003 "is_configured": true, 00:09:50.003 "data_offset": 2048, 00:09:50.003 "data_size": 63488 00:09:50.003 }, 00:09:50.003 { 00:09:50.003 "name": null, 00:09:50.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.003 "is_configured": false, 00:09:50.003 "data_offset": 0, 00:09:50.003 "data_size": 63488 00:09:50.003 }, 00:09:50.003 { 00:09:50.003 "name": null, 00:09:50.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.003 "is_configured": false, 00:09:50.003 "data_offset": 2048, 00:09:50.003 "data_size": 63488 00:09:50.003 }, 00:09:50.003 { 00:09:50.003 "name": null, 00:09:50.003 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.003 "is_configured": false, 00:09:50.003 "data_offset": 2048, 00:09:50.003 "data_size": 63488 00:09:50.003 } 00:09:50.003 ] 00:09:50.003 }' 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.003 23:27:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.263 [2024-09-30 23:27:30.067285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.263 [2024-09-30 23:27:30.067357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.263 [2024-09-30 23:27:30.067375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:50.263 [2024-09-30 23:27:30.067399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.263 [2024-09-30 23:27:30.067804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.263 [2024-09-30 23:27:30.067834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.263 [2024-09-30 23:27:30.067927] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.263 [2024-09-30 23:27:30.067957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.263 pt2 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.263 [2024-09-30 23:27:30.079210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:50.263 [2024-09-30 23:27:30.079266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.263 [2024-09-30 23:27:30.079281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:50.263 [2024-09-30 23:27:30.079291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.263 [2024-09-30 23:27:30.079618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.263 [2024-09-30 23:27:30.079642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:50.263 [2024-09-30 23:27:30.079697] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:50.263 [2024-09-30 23:27:30.079716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.263 pt3 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.263 [2024-09-30 23:27:30.091191] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:50.263 [2024-09-30 23:27:30.091244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.263 [2024-09-30 23:27:30.091258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:09:50.263 [2024-09-30 23:27:30.091267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.263 [2024-09-30 23:27:30.091566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.263 [2024-09-30 23:27:30.091590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:50.263 [2024-09-30 23:27:30.091640] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:50.263 [2024-09-30 23:27:30.091659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:50.263 [2024-09-30 23:27:30.091750] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:50.263 [2024-09-30 23:27:30.091763] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:50.263 [2024-09-30 23:27:30.091999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:50.263 [2024-09-30 23:27:30.092120] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:50.263 [2024-09-30 23:27:30.092133] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:50.263 [2024-09-30 23:27:30.092230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.263 pt4 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.263 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.523 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.523 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.523 "name": "raid_bdev1", 00:09:50.523 "uuid": "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87", 00:09:50.523 "strip_size_kb": 64, 00:09:50.523 "state": "online", 00:09:50.523 "raid_level": "raid0", 00:09:50.523 
"superblock": true, 00:09:50.523 "num_base_bdevs": 4, 00:09:50.523 "num_base_bdevs_discovered": 4, 00:09:50.523 "num_base_bdevs_operational": 4, 00:09:50.523 "base_bdevs_list": [ 00:09:50.523 { 00:09:50.523 "name": "pt1", 00:09:50.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.523 "is_configured": true, 00:09:50.523 "data_offset": 2048, 00:09:50.523 "data_size": 63488 00:09:50.523 }, 00:09:50.523 { 00:09:50.523 "name": "pt2", 00:09:50.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.523 "is_configured": true, 00:09:50.523 "data_offset": 2048, 00:09:50.523 "data_size": 63488 00:09:50.523 }, 00:09:50.523 { 00:09:50.523 "name": "pt3", 00:09:50.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.523 "is_configured": true, 00:09:50.523 "data_offset": 2048, 00:09:50.523 "data_size": 63488 00:09:50.523 }, 00:09:50.523 { 00:09:50.523 "name": "pt4", 00:09:50.523 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.523 "is_configured": true, 00:09:50.523 "data_offset": 2048, 00:09:50.523 "data_size": 63488 00:09:50.523 } 00:09:50.523 ] 00:09:50.523 }' 00:09:50.523 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.523 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.783 23:27:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.783 [2024-09-30 23:27:30.502821] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.783 "name": "raid_bdev1", 00:09:50.783 "aliases": [ 00:09:50.783 "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87" 00:09:50.783 ], 00:09:50.783 "product_name": "Raid Volume", 00:09:50.783 "block_size": 512, 00:09:50.783 "num_blocks": 253952, 00:09:50.783 "uuid": "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87", 00:09:50.783 "assigned_rate_limits": { 00:09:50.783 "rw_ios_per_sec": 0, 00:09:50.783 "rw_mbytes_per_sec": 0, 00:09:50.783 "r_mbytes_per_sec": 0, 00:09:50.783 "w_mbytes_per_sec": 0 00:09:50.783 }, 00:09:50.783 "claimed": false, 00:09:50.783 "zoned": false, 00:09:50.783 "supported_io_types": { 00:09:50.783 "read": true, 00:09:50.783 "write": true, 00:09:50.783 "unmap": true, 00:09:50.783 "flush": true, 00:09:50.783 "reset": true, 00:09:50.783 "nvme_admin": false, 00:09:50.783 "nvme_io": false, 00:09:50.783 "nvme_io_md": false, 00:09:50.783 "write_zeroes": true, 00:09:50.783 "zcopy": false, 00:09:50.783 "get_zone_info": false, 00:09:50.783 "zone_management": false, 00:09:50.783 "zone_append": false, 00:09:50.783 "compare": false, 00:09:50.783 "compare_and_write": false, 00:09:50.783 "abort": false, 00:09:50.783 "seek_hole": false, 00:09:50.783 "seek_data": false, 00:09:50.783 "copy": false, 00:09:50.783 "nvme_iov_md": false 00:09:50.783 }, 00:09:50.783 
"memory_domains": [ 00:09:50.783 { 00:09:50.783 "dma_device_id": "system", 00:09:50.783 "dma_device_type": 1 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.783 "dma_device_type": 2 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "dma_device_id": "system", 00:09:50.783 "dma_device_type": 1 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.783 "dma_device_type": 2 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "dma_device_id": "system", 00:09:50.783 "dma_device_type": 1 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.783 "dma_device_type": 2 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "dma_device_id": "system", 00:09:50.783 "dma_device_type": 1 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.783 "dma_device_type": 2 00:09:50.783 } 00:09:50.783 ], 00:09:50.783 "driver_specific": { 00:09:50.783 "raid": { 00:09:50.783 "uuid": "40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87", 00:09:50.783 "strip_size_kb": 64, 00:09:50.783 "state": "online", 00:09:50.783 "raid_level": "raid0", 00:09:50.783 "superblock": true, 00:09:50.783 "num_base_bdevs": 4, 00:09:50.783 "num_base_bdevs_discovered": 4, 00:09:50.783 "num_base_bdevs_operational": 4, 00:09:50.783 "base_bdevs_list": [ 00:09:50.783 { 00:09:50.783 "name": "pt1", 00:09:50.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.783 "is_configured": true, 00:09:50.783 "data_offset": 2048, 00:09:50.783 "data_size": 63488 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "name": "pt2", 00:09:50.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.783 "is_configured": true, 00:09:50.783 "data_offset": 2048, 00:09:50.783 "data_size": 63488 00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "name": "pt3", 00:09:50.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.783 "is_configured": true, 00:09:50.783 "data_offset": 2048, 00:09:50.783 "data_size": 63488 
00:09:50.783 }, 00:09:50.783 { 00:09:50.783 "name": "pt4", 00:09:50.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:50.783 "is_configured": true, 00:09:50.783 "data_offset": 2048, 00:09:50.783 "data_size": 63488 00:09:50.783 } 00:09:50.783 ] 00:09:50.783 } 00:09:50.783 } 00:09:50.783 }' 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:50.783 pt2 00:09:50.783 pt3 00:09:50.783 pt4' 00:09:50.783 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.043 [2024-09-30 23:27:30.842244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87 '!=' 40f4eca8-7c24-4b8e-8213-e8aa3a9d4c87 ']' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81704 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81704 ']' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81704 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.043 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81704 00:09:51.302 killing process with pid 81704 00:09:51.302 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.302 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.302 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81704' 00:09:51.302 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81704 00:09:51.302 [2024-09-30 23:27:30.914465] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.302 23:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81704 00:09:51.302 [2024-09-30 23:27:30.914550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.302 [2024-09-30 23:27:30.914616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.302 [2024-09-30 23:27:30.914635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:51.302 [2024-09-30 23:27:30.958736] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.561 ************************************ 00:09:51.561 END TEST raid_superblock_test 00:09:51.561 ************************************ 00:09:51.561 23:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:51.561 00:09:51.561 real 0m4.167s 00:09:51.561 user 0m6.549s 00:09:51.561 sys 0m0.959s 00:09:51.561 23:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.561 23:27:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.561 23:27:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:51.561 23:27:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:51.561 23:27:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.561 23:27:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.561 ************************************ 00:09:51.561 START TEST raid_read_error_test 00:09:51.561 ************************************ 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ByE2CqyjGH 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81951 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81951 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@831 -- # '[' -z 81951 ']' 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.561 23:27:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.561 [2024-09-30 23:27:31.360795] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:51.561 [2024-09-30 23:27:31.360912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81951 ] 00:09:51.820 [2024-09-30 23:27:31.519700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.820 [2024-09-30 23:27:31.563604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.820 [2024-09-30 23:27:31.606017] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.820 [2024-09-30 23:27:31.606067] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.389 BaseBdev1_malloc 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.389 true 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.389 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.389 [2024-09-30 23:27:32.212103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.389 [2024-09-30 23:27:32.212170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.390 [2024-09-30 23:27:32.212188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.390 [2024-09-30 23:27:32.212196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.390 [2024-09-30 23:27:32.214338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.390 [2024-09-30 23:27:32.214377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.390 BaseBdev1 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.390 BaseBdev2_malloc 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.390 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.667 true 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.667 [2024-09-30 23:27:32.258573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.667 [2024-09-30 23:27:32.258640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.667 [2024-09-30 23:27:32.258659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.667 [2024-09-30 23:27:32.258668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.667 [2024-09-30 23:27:32.260794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.667 [2024-09-30 23:27:32.260834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.667 BaseBdev2 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.667 BaseBdev3_malloc 00:09:52.667 23:27:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.667 true 00:09:52.667 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.668 [2024-09-30 23:27:32.299563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.668 [2024-09-30 23:27:32.299616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.668 [2024-09-30 23:27:32.299634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.668 [2024-09-30 23:27:32.299643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.668 [2024-09-30 23:27:32.301779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.668 [2024-09-30 23:27:32.301820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:52.668 BaseBdev3 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.668 BaseBdev4_malloc 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.668 true 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.668 [2024-09-30 23:27:32.340157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:52.668 [2024-09-30 23:27:32.340207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.668 [2024-09-30 23:27:32.340227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:52.668 [2024-09-30 23:27:32.340236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.668 [2024-09-30 23:27:32.342224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.668 [2024-09-30 23:27:32.342259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:52.668 BaseBdev4 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.668 [2024-09-30 23:27:32.352168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.668 [2024-09-30 23:27:32.354043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.668 [2024-09-30 23:27:32.354134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.668 [2024-09-30 23:27:32.354187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:52.668 [2024-09-30 23:27:32.354394] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:09:52.668 [2024-09-30 23:27:32.354410] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.668 [2024-09-30 23:27:32.354657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:52.668 [2024-09-30 23:27:32.354807] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:09:52.668 [2024-09-30 23:27:32.354824] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:09:52.668 [2024-09-30 23:27:32.354955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:52.668 23:27:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.668 "name": "raid_bdev1", 00:09:52.668 "uuid": "04c8b04e-00eb-4dee-8733-1e4f241a5807", 00:09:52.668 "strip_size_kb": 64, 00:09:52.668 "state": "online", 00:09:52.668 "raid_level": "raid0", 00:09:52.668 "superblock": true, 00:09:52.668 "num_base_bdevs": 4, 00:09:52.668 "num_base_bdevs_discovered": 4, 00:09:52.668 "num_base_bdevs_operational": 4, 00:09:52.668 "base_bdevs_list": [ 00:09:52.668 
{ 00:09:52.668 "name": "BaseBdev1", 00:09:52.668 "uuid": "1843e0ff-4d54-568e-ba50-64d0e6cee086", 00:09:52.668 "is_configured": true, 00:09:52.668 "data_offset": 2048, 00:09:52.668 "data_size": 63488 00:09:52.668 }, 00:09:52.668 { 00:09:52.668 "name": "BaseBdev2", 00:09:52.668 "uuid": "1a999c14-a52b-5fc8-90a2-6a9adacf14c8", 00:09:52.668 "is_configured": true, 00:09:52.668 "data_offset": 2048, 00:09:52.668 "data_size": 63488 00:09:52.668 }, 00:09:52.668 { 00:09:52.668 "name": "BaseBdev3", 00:09:52.668 "uuid": "1e91badf-1b99-5e8c-9606-3f7f11fe44f0", 00:09:52.668 "is_configured": true, 00:09:52.668 "data_offset": 2048, 00:09:52.668 "data_size": 63488 00:09:52.668 }, 00:09:52.668 { 00:09:52.668 "name": "BaseBdev4", 00:09:52.668 "uuid": "e28dd5bd-f6d8-59e6-9933-81136a9e2b3a", 00:09:52.668 "is_configured": true, 00:09:52.668 "data_offset": 2048, 00:09:52.668 "data_size": 63488 00:09:52.668 } 00:09:52.668 ] 00:09:52.668 }' 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.668 23:27:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.927 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.927 23:27:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.187 [2024-09-30 23:27:32.867629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.124 23:27:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.124 23:27:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.124 "name": "raid_bdev1", 00:09:54.124 "uuid": "04c8b04e-00eb-4dee-8733-1e4f241a5807", 00:09:54.124 "strip_size_kb": 64, 00:09:54.124 "state": "online", 00:09:54.124 "raid_level": "raid0", 00:09:54.124 "superblock": true, 00:09:54.124 "num_base_bdevs": 4, 00:09:54.124 "num_base_bdevs_discovered": 4, 00:09:54.124 "num_base_bdevs_operational": 4, 00:09:54.124 "base_bdevs_list": [ 00:09:54.124 { 00:09:54.124 "name": "BaseBdev1", 00:09:54.124 "uuid": "1843e0ff-4d54-568e-ba50-64d0e6cee086", 00:09:54.124 "is_configured": true, 00:09:54.124 "data_offset": 2048, 00:09:54.124 "data_size": 63488 00:09:54.124 }, 00:09:54.124 { 00:09:54.124 "name": "BaseBdev2", 00:09:54.124 "uuid": "1a999c14-a52b-5fc8-90a2-6a9adacf14c8", 00:09:54.124 "is_configured": true, 00:09:54.124 "data_offset": 2048, 00:09:54.124 "data_size": 63488 00:09:54.124 }, 00:09:54.124 { 00:09:54.124 "name": "BaseBdev3", 00:09:54.124 "uuid": "1e91badf-1b99-5e8c-9606-3f7f11fe44f0", 00:09:54.124 "is_configured": true, 00:09:54.124 "data_offset": 2048, 00:09:54.124 "data_size": 63488 00:09:54.124 }, 00:09:54.124 { 00:09:54.124 "name": "BaseBdev4", 00:09:54.124 "uuid": "e28dd5bd-f6d8-59e6-9933-81136a9e2b3a", 00:09:54.124 "is_configured": true, 00:09:54.124 "data_offset": 2048, 00:09:54.124 "data_size": 63488 00:09:54.124 } 00:09:54.124 ] 00:09:54.124 }' 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.124 23:27:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.383 [2024-09-30 23:27:34.187168] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.383 [2024-09-30 23:27:34.187202] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.383 [2024-09-30 23:27:34.189730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.383 [2024-09-30 23:27:34.189786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.383 [2024-09-30 23:27:34.189829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.383 [2024-09-30 23:27:34.189838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:09:54.383 { 00:09:54.383 "results": [ 00:09:54.383 { 00:09:54.383 "job": "raid_bdev1", 00:09:54.383 "core_mask": "0x1", 00:09:54.383 "workload": "randrw", 00:09:54.383 "percentage": 50, 00:09:54.383 "status": "finished", 00:09:54.383 "queue_depth": 1, 00:09:54.383 "io_size": 131072, 00:09:54.383 "runtime": 1.320274, 00:09:54.383 "iops": 17146.44081455819, 00:09:54.383 "mibps": 2143.305101819774, 00:09:54.383 "io_failed": 1, 00:09:54.383 "io_timeout": 0, 00:09:54.383 "avg_latency_us": 80.94715495596249, 00:09:54.383 "min_latency_us": 24.146724890829695, 00:09:54.383 "max_latency_us": 1323.598253275109 00:09:54.383 } 00:09:54.383 ], 00:09:54.383 "core_count": 1 00:09:54.383 } 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81951 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81951 ']' 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81951 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81951 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.383 killing process with pid 81951 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81951' 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81951 00:09:54.383 [2024-09-30 23:27:34.233296] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.383 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81951 00:09:54.642 [2024-09-30 23:27:34.269744] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.901 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ByE2CqyjGH 00:09:54.901 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:54.901 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:54.901 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:54.901 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:54.901 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.901 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:54.902 23:27:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:54.902 00:09:54.902 real 0m3.248s 00:09:54.902 user 0m4.041s 00:09:54.902 sys 0m0.538s 00:09:54.902 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:54.902 23:27:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.902 ************************************ 00:09:54.902 END TEST raid_read_error_test 00:09:54.902 ************************************ 00:09:54.902 23:27:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:54.902 23:27:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:54.902 23:27:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.902 23:27:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.902 ************************************ 00:09:54.902 START TEST raid_write_error_test 00:09:54.902 ************************************ 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YHrFycpo43 00:09:54.902 23:27:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82081 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82081 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82081 ']' 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.902 23:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.902 [2024-09-30 23:27:34.691480] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:09:54.902 [2024-09-30 23:27:34.691593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82081 ] 00:09:55.161 [2024-09-30 23:27:34.849417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.161 [2024-09-30 23:27:34.893348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.161 [2024-09-30 23:27:34.935477] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.161 [2024-09-30 23:27:34.935517] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.742 BaseBdev1_malloc 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.742 true 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.742 [2024-09-30 23:27:35.537727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:55.742 [2024-09-30 23:27:35.537795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.742 [2024-09-30 23:27:35.537821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:55.742 [2024-09-30 23:27:35.537833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.742 [2024-09-30 23:27:35.540307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.742 [2024-09-30 23:27:35.540348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:55.742 BaseBdev1 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.742 BaseBdev2_malloc 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:55.742 23:27:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.742 true 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.742 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.742 [2024-09-30 23:27:35.589339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:55.742 [2024-09-30 23:27:35.589404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:55.742 [2024-09-30 23:27:35.589430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:55.742 [2024-09-30 23:27:35.589443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:55.742 [2024-09-30 23:27:35.592304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:55.742 [2024-09-30 23:27:35.592354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:55.742 BaseBdev2 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:56.006 BaseBdev3_malloc 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.006 true 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.006 [2024-09-30 23:27:35.629889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:56.006 [2024-09-30 23:27:35.629932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.006 [2024-09-30 23:27:35.629949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:56.006 [2024-09-30 23:27:35.629959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.006 [2024-09-30 23:27:35.632060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.006 [2024-09-30 23:27:35.632101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:56.006 BaseBdev3 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.006 BaseBdev4_malloc 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.006 true 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.006 [2024-09-30 23:27:35.670392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:56.006 [2024-09-30 23:27:35.670442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.006 [2024-09-30 23:27:35.670463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:56.006 [2024-09-30 23:27:35.670472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.006 BaseBdev4 00:09:56.006 [2024-09-30 23:27:35.672520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.006 [2024-09-30 23:27:35.672554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.006 [2024-09-30 23:27:35.682425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:56.006 [2024-09-30 23:27:35.684313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:56.006 [2024-09-30 23:27:35.684406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:56.006 [2024-09-30 23:27:35.684460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:56.006 [2024-09-30 23:27:35.684655] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080
00:09:56.006 [2024-09-30 23:27:35.684671] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:09:56.006 [2024-09-30 23:27:35.684961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:56.006 [2024-09-30 23:27:35.685119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080
00:09:56.006 [2024-09-30 23:27:35.685133] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080
00:09:56.006 [2024-09-30 23:27:35.685249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:56.006 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:56.007 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:56.007 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:56.007 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.007 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:56.007 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:56.007 "name": "raid_bdev1",
00:09:56.007 "uuid": "43c0085f-afb9-4619-b4a5-b2fc7f0bbe11",
00:09:56.007 "strip_size_kb": 64,
00:09:56.007 "state": "online",
00:09:56.007 "raid_level": "raid0",
00:09:56.007 "superblock": true,
00:09:56.007 "num_base_bdevs": 4,
00:09:56.007 "num_base_bdevs_discovered": 4,
00:09:56.007 "num_base_bdevs_operational": 4,
00:09:56.007 "base_bdevs_list": [
00:09:56.007 {
00:09:56.007 "name": "BaseBdev1",
00:09:56.007 "uuid": "df674a6a-017a-5a27-9e57-e1a431cbc3d3",
00:09:56.007 "is_configured": true,
00:09:56.007 "data_offset": 2048,
00:09:56.007 "data_size": 63488
00:09:56.007 },
00:09:56.007 {
00:09:56.007 "name": "BaseBdev2",
00:09:56.007 "uuid": "6784aefe-be76-5aee-ac00-ae29f05dc6f3",
00:09:56.007 "is_configured": true,
00:09:56.007 "data_offset": 2048,
00:09:56.007 "data_size": 63488
00:09:56.007 },
00:09:56.007 {
00:09:56.007 "name": "BaseBdev3",
00:09:56.007 "uuid": "2c188ebc-5d0a-5c0e-b879-840d774d5305",
00:09:56.007 "is_configured": true,
00:09:56.007 "data_offset": 2048,
00:09:56.007 "data_size": 63488
00:09:56.007 },
00:09:56.007 {
00:09:56.007 "name": "BaseBdev4",
00:09:56.007 "uuid": "4eef5b55-caed-5b0f-b730-89eca8f039b6",
00:09:56.007 "is_configured": true,
00:09:56.007 "data_offset": 2048,
00:09:56.007 "data_size": 63488
00:09:56.007 }
00:09:56.007 ]
00:09:56.007 }'
00:09:56.007 23:27:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:56.007 23:27:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:56.266 23:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:09:56.266 23:27:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:56.525 [2024-09-30 23:27:36.213841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:57.463 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:57.464 "name": "raid_bdev1",
00:09:57.464 "uuid": "43c0085f-afb9-4619-b4a5-b2fc7f0bbe11",
00:09:57.464 "strip_size_kb": 64,
00:09:57.464 "state": "online",
00:09:57.464 "raid_level": "raid0",
00:09:57.464 "superblock": true,
00:09:57.464 "num_base_bdevs": 4,
00:09:57.464 "num_base_bdevs_discovered": 4,
00:09:57.464 "num_base_bdevs_operational": 4,
00:09:57.464 "base_bdevs_list": [
00:09:57.464 {
00:09:57.464 "name": "BaseBdev1",
00:09:57.464 "uuid": "df674a6a-017a-5a27-9e57-e1a431cbc3d3",
00:09:57.464 "is_configured": true,
00:09:57.464 "data_offset": 2048,
00:09:57.464 "data_size": 63488
00:09:57.464 },
00:09:57.464 {
00:09:57.464 "name": "BaseBdev2",
00:09:57.464 "uuid": "6784aefe-be76-5aee-ac00-ae29f05dc6f3",
00:09:57.464 "is_configured": true,
00:09:57.464 "data_offset": 2048,
00:09:57.464 "data_size": 63488
00:09:57.464 },
00:09:57.464 {
00:09:57.464 "name": "BaseBdev3",
00:09:57.464 "uuid": "2c188ebc-5d0a-5c0e-b879-840d774d5305",
00:09:57.464 "is_configured": true,
00:09:57.464 "data_offset": 2048,
00:09:57.464 "data_size": 63488
00:09:57.464 },
00:09:57.464 {
00:09:57.464 "name": "BaseBdev4",
00:09:57.464 "uuid": "4eef5b55-caed-5b0f-b730-89eca8f039b6",
00:09:57.464 "is_configured": true,
00:09:57.464 "data_offset": 2048,
00:09:57.464 "data_size": 63488
00:09:57.464 }
00:09:57.464 ]
00:09:57.464 }'
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:57.464 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.032 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:58.032 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.032 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.032 [2024-09-30 23:27:37.625951] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:58.032 [2024-09-30 23:27:37.625987] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:58.032 [2024-09-30 23:27:37.628507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:58.032 [2024-09-30 23:27:37.628570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:58.032 [2024-09-30 23:27:37.628616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:58.032 [2024-09-30 23:27:37.628625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline
00:09:58.032 {
00:09:58.032 "results": [
00:09:58.032 {
00:09:58.032 "job": "raid_bdev1",
00:09:58.032 "core_mask": "0x1",
00:09:58.032 "workload": "randrw",
00:09:58.032 "percentage": 50,
00:09:58.032 "status": "finished",
00:09:58.032 "queue_depth": 1,
00:09:58.032 "io_size": 131072,
00:09:58.032 "runtime": 1.412985,
00:09:58.032 "iops": 16817.58829711568,
00:09:58.032 "mibps": 2102.19853713946,
00:09:58.032 "io_failed": 1,
00:09:58.032 "io_timeout": 0,
00:09:58.032 "avg_latency_us": 82.5178970208506,
00:09:58.032 "min_latency_us": 25.152838427947597,
00:09:58.032 "max_latency_us": 1366.5257641921398
00:09:58.033 }
00:09:58.033 ],
00:09:58.033 "core_count": 1
00:09:58.033 }
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82081
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82081 ']'
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82081
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82081
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:58.033 killing process with pid 82081
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82081'
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82081
00:09:58.033 [2024-09-30 23:27:37.669167] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:58.033 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82081
00:09:58.033 [2024-09-30 23:27:37.705500] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YHrFycpo43
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:09:58.292
00:09:58.292 real 0m3.366s
00:09:58.292 user 0m4.246s
00:09:58.292 sys 0m0.565s
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:58.292 23:27:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.292 ************************************
00:09:58.292 END TEST raid_write_error_test
00:09:58.292 ************************************
00:09:58.292 23:27:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:58.292 23:27:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false
00:09:58.292 23:27:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:58.292 23:27:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:58.292 23:27:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:58.292 ************************************
00:09:58.292 START TEST raid_state_function_test
00:09:58.292 ************************************
00:09:58.292 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false
00:09:58.292 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:09:58.292 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:09:58.292 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:58.292 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:58.292 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:58.292 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:58.292 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82214
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82214'
00:09:58.293 Process raid pid: 82214
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82214
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82214 ']'
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:58.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:58.293 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.293 [2024-09-30 23:27:38.122227] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization...
00:09:58.293 [2024-09-30 23:27:38.122363] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:58.552 [2024-09-30 23:27:38.283624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.552 [2024-09-30 23:27:38.328355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.552 [2024-09-30 23:27:38.370398] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:58.552 [2024-09-30 23:27:38.370441] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:59.120 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:59.120 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.121 [2024-09-30 23:27:38.947718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:59.121 [2024-09-30 23:27:38.947785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:59.121 [2024-09-30 23:27:38.947810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:59.121 [2024-09-30 23:27:38.947826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:59.121 [2024-09-30 23:27:38.947842] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:59.121 [2024-09-30 23:27:38.947869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:59.121 [2024-09-30 23:27:38.947880] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:59.121 [2024-09-30 23:27:38.947894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.121 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.379 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.379 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:59.379 "name": "Existed_Raid",
00:09:59.379 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.379 "strip_size_kb": 64,
00:09:59.379 "state": "configuring",
00:09:59.379 "raid_level": "concat",
00:09:59.379 "superblock": false,
00:09:59.379 "num_base_bdevs": 4,
00:09:59.379 "num_base_bdevs_discovered": 0,
00:09:59.379 "num_base_bdevs_operational": 4,
00:09:59.379 "base_bdevs_list": [
00:09:59.379 {
00:09:59.379 "name": "BaseBdev1",
00:09:59.379 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.379 "is_configured": false,
00:09:59.379 "data_offset": 0,
00:09:59.379 "data_size": 0
00:09:59.379 },
00:09:59.379 {
00:09:59.379 "name": "BaseBdev2",
00:09:59.379 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.379 "is_configured": false,
00:09:59.379 "data_offset": 0,
00:09:59.379 "data_size": 0
00:09:59.379 },
00:09:59.379 {
00:09:59.379 "name": "BaseBdev3",
00:09:59.379 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.379 "is_configured": false,
00:09:59.379 "data_offset": 0,
00:09:59.379 "data_size": 0
00:09:59.379 },
00:09:59.379 {
00:09:59.379 "name": "BaseBdev4",
00:09:59.379 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.379 "is_configured": false,
00:09:59.379 "data_offset": 0,
00:09:59.379 "data_size": 0
00:09:59.379 }
00:09:59.379 ]
00:09:59.379 }'
00:09:59.379 23:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:59.379 23:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.638 [2024-09-30 23:27:39.386865] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:59.638 [2024-09-30 23:27:39.386925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.638 [2024-09-30 23:27:39.398921] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:59.638 [2024-09-30 23:27:39.398967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:59.638 [2024-09-30 23:27:39.398989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:59.638 [2024-09-30 23:27:39.399004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:59.638 [2024-09-30 23:27:39.399013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:59.638 [2024-09-30 23:27:39.399027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:59.638 [2024-09-30 23:27:39.399036] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:59.638 [2024-09-30 23:27:39.399049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.638 [2024-09-30 23:27:39.419676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:59.638 BaseBdev1
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.638 [
00:09:59.638 {
00:09:59.638 "name": "BaseBdev1",
00:09:59.638 "aliases": [
00:09:59.638 "25ddd139-bb00-4054-bf3f-9fd1e391de4b"
00:09:59.638 ],
00:09:59.638 "product_name": "Malloc disk",
00:09:59.638 "block_size": 512,
00:09:59.638 "num_blocks": 65536,
00:09:59.638 "uuid": "25ddd139-bb00-4054-bf3f-9fd1e391de4b",
00:09:59.638 "assigned_rate_limits": {
00:09:59.638 "rw_ios_per_sec": 0,
00:09:59.638 "rw_mbytes_per_sec": 0,
00:09:59.638 "r_mbytes_per_sec": 0,
00:09:59.638 "w_mbytes_per_sec": 0
00:09:59.638 },
00:09:59.638 "claimed": true,
00:09:59.638 "claim_type": "exclusive_write",
00:09:59.638 "zoned": false,
00:09:59.638 "supported_io_types": {
00:09:59.638 "read": true,
00:09:59.638 "write": true,
00:09:59.638 "unmap": true,
00:09:59.638 "flush": true,
00:09:59.638 "reset": true,
00:09:59.638 "nvme_admin": false,
00:09:59.638 "nvme_io": false,
00:09:59.638 "nvme_io_md": false,
00:09:59.638 "write_zeroes": true,
00:09:59.638 "zcopy": true,
00:09:59.638 "get_zone_info": false,
00:09:59.638 "zone_management": false,
00:09:59.638 "zone_append": false,
00:09:59.638 "compare": false,
00:09:59.638 "compare_and_write": false,
00:09:59.638 "abort": true,
00:09:59.638 "seek_hole": false,
00:09:59.638 "seek_data": false,
00:09:59.638 "copy": true,
00:09:59.638 "nvme_iov_md": false
00:09:59.638 },
00:09:59.638 "memory_domains": [
00:09:59.638 {
00:09:59.638 "dma_device_id": "system",
00:09:59.638 "dma_device_type": 1
00:09:59.638 },
00:09:59.638 {
00:09:59.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.638 "dma_device_type": 2
00:09:59.638 }
00:09:59.638 ],
00:09:59.638 "driver_specific": {}
00:09:59.638 }
00:09:59.638 ]
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:59.638 "name": "Existed_Raid",
00:09:59.638 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.638 "strip_size_kb": 64,
00:09:59.638 "state": "configuring",
00:09:59.638 "raid_level": "concat",
00:09:59.638 "superblock": false,
00:09:59.638 "num_base_bdevs": 4,
00:09:59.638 "num_base_bdevs_discovered": 1,
00:09:59.638 "num_base_bdevs_operational": 4,
00:09:59.638 "base_bdevs_list": [
00:09:59.638 {
00:09:59.638 "name": "BaseBdev1",
00:09:59.638 "uuid": "25ddd139-bb00-4054-bf3f-9fd1e391de4b",
00:09:59.638 "is_configured": true,
00:09:59.638 "data_offset": 0,
00:09:59.638 "data_size": 65536
00:09:59.638 },
00:09:59.638 {
00:09:59.638 "name": "BaseBdev2",
00:09:59.638 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.638 "is_configured": false,
00:09:59.638 "data_offset": 0,
00:09:59.638 "data_size": 0
00:09:59.638 },
00:09:59.638 {
00:09:59.638 "name": "BaseBdev3",
00:09:59.638 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.638 "is_configured": false,
00:09:59.638 "data_offset": 0,
00:09:59.638 "data_size": 0
00:09:59.638 },
00:09:59.638 {
00:09:59.638 "name": "BaseBdev4",
00:09:59.638 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.638 "is_configured": false,
00:09:59.638 "data_offset": 0,
00:09:59.638 "data_size": 0
00:09:59.638 }
00:09:59.638 ]
00:09:59.638 }'
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:59.638 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.207 [2024-09-30 23:27:39.890964] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:00.207 [2024-09-30 23:27:39.891035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.207 [2024-09-30 23:27:39.902992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:00.207 [2024-09-30 23:27:39.904854] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:00.207 [2024-09-30 23:27:39.904915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:00.207 [2024-09-30 23:27:39.904932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:00.207 [2024-09-30 23:27:39.904945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:00.207 [2024-09-30 23:27:39.904956] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:10:00.207 [2024-09-30 23:27:39.904970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:00.207 "name": "Existed_Raid",
00:10:00.207 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.207 "strip_size_kb": 64,
00:10:00.207 "state": "configuring",
00:10:00.207 "raid_level": "concat",
00:10:00.207 "superblock": false,
00:10:00.207 "num_base_bdevs": 4,
00:10:00.207 "num_base_bdevs_discovered": 1,
00:10:00.207 "num_base_bdevs_operational": 4,
00:10:00.207 "base_bdevs_list": [
00:10:00.207 {
00:10:00.207 "name": "BaseBdev1",
00:10:00.207 "uuid": "25ddd139-bb00-4054-bf3f-9fd1e391de4b",
00:10:00.207 "is_configured": true,
00:10:00.207 "data_offset": 0,
00:10:00.207 "data_size": 65536
00:10:00.207 },
00:10:00.207 {
00:10:00.207 "name": "BaseBdev2",
00:10:00.207 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.207 "is_configured": false,
00:10:00.207 "data_offset": 0,
00:10:00.207 "data_size": 0
00:10:00.207 },
00:10:00.207 {
00:10:00.207 "name": "BaseBdev3",
00:10:00.207 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.207 "is_configured": false,
00:10:00.207 "data_offset": 0,
00:10:00.207 "data_size": 0
00:10:00.207 },
00:10:00.207 {
00:10:00.207 "name": "BaseBdev4",
00:10:00.207 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.207 "is_configured": false,
00:10:00.207 "data_offset": 0,
00:10:00.207 "data_size": 0
00:10:00.207 }
00:10:00.207 ]
00:10:00.207 }'
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:00.207 23:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.777 [2024-09-30 23:27:40.361609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:00.777 BaseBdev2
00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:00.777 23:27:40
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.777 [ 00:10:00.777 { 00:10:00.777 "name": "BaseBdev2", 00:10:00.777 "aliases": [ 00:10:00.777 "f1636f05-c903-4474-9bd1-5c97058c3576" 00:10:00.777 ], 00:10:00.777 "product_name": "Malloc disk", 00:10:00.777 "block_size": 512, 00:10:00.777 "num_blocks": 65536, 00:10:00.777 "uuid": "f1636f05-c903-4474-9bd1-5c97058c3576", 00:10:00.777 "assigned_rate_limits": { 00:10:00.777 "rw_ios_per_sec": 0, 00:10:00.777 "rw_mbytes_per_sec": 0, 00:10:00.777 "r_mbytes_per_sec": 0, 00:10:00.777 "w_mbytes_per_sec": 0 00:10:00.777 }, 00:10:00.777 "claimed": true, 00:10:00.777 "claim_type": "exclusive_write", 00:10:00.777 "zoned": false, 00:10:00.777 "supported_io_types": { 
00:10:00.777 "read": true, 00:10:00.777 "write": true, 00:10:00.777 "unmap": true, 00:10:00.777 "flush": true, 00:10:00.777 "reset": true, 00:10:00.777 "nvme_admin": false, 00:10:00.777 "nvme_io": false, 00:10:00.777 "nvme_io_md": false, 00:10:00.777 "write_zeroes": true, 00:10:00.777 "zcopy": true, 00:10:00.777 "get_zone_info": false, 00:10:00.777 "zone_management": false, 00:10:00.777 "zone_append": false, 00:10:00.777 "compare": false, 00:10:00.777 "compare_and_write": false, 00:10:00.777 "abort": true, 00:10:00.777 "seek_hole": false, 00:10:00.777 "seek_data": false, 00:10:00.777 "copy": true, 00:10:00.777 "nvme_iov_md": false 00:10:00.777 }, 00:10:00.777 "memory_domains": [ 00:10:00.777 { 00:10:00.777 "dma_device_id": "system", 00:10:00.777 "dma_device_type": 1 00:10:00.777 }, 00:10:00.777 { 00:10:00.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.777 "dma_device_type": 2 00:10:00.777 } 00:10:00.777 ], 00:10:00.777 "driver_specific": {} 00:10:00.777 } 00:10:00.777 ] 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:00.777 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.778 "name": "Existed_Raid", 00:10:00.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.778 "strip_size_kb": 64, 00:10:00.778 "state": "configuring", 00:10:00.778 "raid_level": "concat", 00:10:00.778 "superblock": false, 00:10:00.778 "num_base_bdevs": 4, 00:10:00.778 "num_base_bdevs_discovered": 2, 00:10:00.778 "num_base_bdevs_operational": 4, 00:10:00.778 "base_bdevs_list": [ 00:10:00.778 { 00:10:00.778 "name": "BaseBdev1", 00:10:00.778 "uuid": "25ddd139-bb00-4054-bf3f-9fd1e391de4b", 00:10:00.778 "is_configured": true, 00:10:00.778 "data_offset": 0, 00:10:00.778 "data_size": 65536 00:10:00.778 }, 00:10:00.778 { 00:10:00.778 "name": "BaseBdev2", 00:10:00.778 "uuid": "f1636f05-c903-4474-9bd1-5c97058c3576", 00:10:00.778 
"is_configured": true, 00:10:00.778 "data_offset": 0, 00:10:00.778 "data_size": 65536 00:10:00.778 }, 00:10:00.778 { 00:10:00.778 "name": "BaseBdev3", 00:10:00.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.778 "is_configured": false, 00:10:00.778 "data_offset": 0, 00:10:00.778 "data_size": 0 00:10:00.778 }, 00:10:00.778 { 00:10:00.778 "name": "BaseBdev4", 00:10:00.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.778 "is_configured": false, 00:10:00.778 "data_offset": 0, 00:10:00.778 "data_size": 0 00:10:00.778 } 00:10:00.778 ] 00:10:00.778 }' 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.778 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.038 [2024-09-30 23:27:40.831735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.038 BaseBdev3 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.038 [ 00:10:01.038 { 00:10:01.038 "name": "BaseBdev3", 00:10:01.038 "aliases": [ 00:10:01.038 "b8b858e4-92ad-436b-b716-c364c5170742" 00:10:01.038 ], 00:10:01.038 "product_name": "Malloc disk", 00:10:01.038 "block_size": 512, 00:10:01.038 "num_blocks": 65536, 00:10:01.038 "uuid": "b8b858e4-92ad-436b-b716-c364c5170742", 00:10:01.038 "assigned_rate_limits": { 00:10:01.038 "rw_ios_per_sec": 0, 00:10:01.038 "rw_mbytes_per_sec": 0, 00:10:01.038 "r_mbytes_per_sec": 0, 00:10:01.038 "w_mbytes_per_sec": 0 00:10:01.038 }, 00:10:01.038 "claimed": true, 00:10:01.038 "claim_type": "exclusive_write", 00:10:01.038 "zoned": false, 00:10:01.038 "supported_io_types": { 00:10:01.038 "read": true, 00:10:01.038 "write": true, 00:10:01.038 "unmap": true, 00:10:01.038 "flush": true, 00:10:01.038 "reset": true, 00:10:01.038 "nvme_admin": false, 00:10:01.038 "nvme_io": false, 00:10:01.038 "nvme_io_md": false, 00:10:01.038 "write_zeroes": true, 00:10:01.038 "zcopy": true, 00:10:01.038 "get_zone_info": false, 00:10:01.038 "zone_management": false, 00:10:01.038 "zone_append": false, 00:10:01.038 "compare": false, 00:10:01.038 "compare_and_write": false, 
00:10:01.038 "abort": true, 00:10:01.038 "seek_hole": false, 00:10:01.038 "seek_data": false, 00:10:01.038 "copy": true, 00:10:01.038 "nvme_iov_md": false 00:10:01.038 }, 00:10:01.038 "memory_domains": [ 00:10:01.038 { 00:10:01.038 "dma_device_id": "system", 00:10:01.038 "dma_device_type": 1 00:10:01.038 }, 00:10:01.038 { 00:10:01.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.038 "dma_device_type": 2 00:10:01.038 } 00:10:01.038 ], 00:10:01.038 "driver_specific": {} 00:10:01.038 } 00:10:01.038 ] 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.038 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.298 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.298 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.298 "name": "Existed_Raid", 00:10:01.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.298 "strip_size_kb": 64, 00:10:01.298 "state": "configuring", 00:10:01.298 "raid_level": "concat", 00:10:01.298 "superblock": false, 00:10:01.298 "num_base_bdevs": 4, 00:10:01.298 "num_base_bdevs_discovered": 3, 00:10:01.298 "num_base_bdevs_operational": 4, 00:10:01.298 "base_bdevs_list": [ 00:10:01.298 { 00:10:01.298 "name": "BaseBdev1", 00:10:01.298 "uuid": "25ddd139-bb00-4054-bf3f-9fd1e391de4b", 00:10:01.298 "is_configured": true, 00:10:01.298 "data_offset": 0, 00:10:01.298 "data_size": 65536 00:10:01.298 }, 00:10:01.298 { 00:10:01.298 "name": "BaseBdev2", 00:10:01.298 "uuid": "f1636f05-c903-4474-9bd1-5c97058c3576", 00:10:01.298 "is_configured": true, 00:10:01.298 "data_offset": 0, 00:10:01.298 "data_size": 65536 00:10:01.298 }, 00:10:01.298 { 00:10:01.298 "name": "BaseBdev3", 00:10:01.298 "uuid": "b8b858e4-92ad-436b-b716-c364c5170742", 00:10:01.298 "is_configured": true, 00:10:01.298 "data_offset": 0, 00:10:01.298 "data_size": 65536 00:10:01.298 }, 00:10:01.298 { 00:10:01.298 "name": "BaseBdev4", 00:10:01.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.298 "is_configured": false, 
00:10:01.298 "data_offset": 0, 00:10:01.298 "data_size": 0 00:10:01.298 } 00:10:01.298 ] 00:10:01.298 }' 00:10:01.298 23:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.298 23:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.558 [2024-09-30 23:27:41.270007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:01.558 [2024-09-30 23:27:41.270063] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:01.558 [2024-09-30 23:27:41.270075] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:01.558 [2024-09-30 23:27:41.270373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:01.558 [2024-09-30 23:27:41.270550] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:01.558 [2024-09-30 23:27:41.270573] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:01.558 [2024-09-30 23:27:41.270801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.558 BaseBdev4 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.558 [ 00:10:01.558 { 00:10:01.558 "name": "BaseBdev4", 00:10:01.558 "aliases": [ 00:10:01.558 "296d803c-2ed8-4df7-8fde-8a0354172f90" 00:10:01.558 ], 00:10:01.558 "product_name": "Malloc disk", 00:10:01.558 "block_size": 512, 00:10:01.558 "num_blocks": 65536, 00:10:01.558 "uuid": "296d803c-2ed8-4df7-8fde-8a0354172f90", 00:10:01.558 "assigned_rate_limits": { 00:10:01.558 "rw_ios_per_sec": 0, 00:10:01.558 "rw_mbytes_per_sec": 0, 00:10:01.558 "r_mbytes_per_sec": 0, 00:10:01.558 "w_mbytes_per_sec": 0 00:10:01.558 }, 00:10:01.558 "claimed": true, 00:10:01.558 "claim_type": "exclusive_write", 00:10:01.558 "zoned": false, 00:10:01.558 "supported_io_types": { 00:10:01.558 "read": true, 00:10:01.558 "write": true, 00:10:01.558 "unmap": true, 00:10:01.558 "flush": true, 00:10:01.558 "reset": true, 00:10:01.558 
"nvme_admin": false, 00:10:01.558 "nvme_io": false, 00:10:01.558 "nvme_io_md": false, 00:10:01.558 "write_zeroes": true, 00:10:01.558 "zcopy": true, 00:10:01.558 "get_zone_info": false, 00:10:01.558 "zone_management": false, 00:10:01.558 "zone_append": false, 00:10:01.558 "compare": false, 00:10:01.558 "compare_and_write": false, 00:10:01.558 "abort": true, 00:10:01.558 "seek_hole": false, 00:10:01.558 "seek_data": false, 00:10:01.558 "copy": true, 00:10:01.558 "nvme_iov_md": false 00:10:01.558 }, 00:10:01.558 "memory_domains": [ 00:10:01.558 { 00:10:01.558 "dma_device_id": "system", 00:10:01.558 "dma_device_type": 1 00:10:01.558 }, 00:10:01.558 { 00:10:01.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.558 "dma_device_type": 2 00:10:01.558 } 00:10:01.558 ], 00:10:01.558 "driver_specific": {} 00:10:01.558 } 00:10:01.558 ] 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.558 
23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.558 "name": "Existed_Raid", 00:10:01.558 "uuid": "07513a51-fe6a-4dd6-8d12-fd0eb7269dd6", 00:10:01.558 "strip_size_kb": 64, 00:10:01.558 "state": "online", 00:10:01.558 "raid_level": "concat", 00:10:01.558 "superblock": false, 00:10:01.558 "num_base_bdevs": 4, 00:10:01.558 "num_base_bdevs_discovered": 4, 00:10:01.558 "num_base_bdevs_operational": 4, 00:10:01.558 "base_bdevs_list": [ 00:10:01.558 { 00:10:01.558 "name": "BaseBdev1", 00:10:01.558 "uuid": "25ddd139-bb00-4054-bf3f-9fd1e391de4b", 00:10:01.558 "is_configured": true, 00:10:01.558 "data_offset": 0, 00:10:01.558 "data_size": 65536 00:10:01.558 }, 00:10:01.558 { 00:10:01.558 "name": "BaseBdev2", 00:10:01.558 "uuid": "f1636f05-c903-4474-9bd1-5c97058c3576", 00:10:01.558 "is_configured": true, 00:10:01.558 "data_offset": 0, 00:10:01.558 "data_size": 65536 00:10:01.558 }, 00:10:01.558 { 00:10:01.558 "name": "BaseBdev3", 
00:10:01.558 "uuid": "b8b858e4-92ad-436b-b716-c364c5170742", 00:10:01.558 "is_configured": true, 00:10:01.558 "data_offset": 0, 00:10:01.558 "data_size": 65536 00:10:01.558 }, 00:10:01.558 { 00:10:01.558 "name": "BaseBdev4", 00:10:01.558 "uuid": "296d803c-2ed8-4df7-8fde-8a0354172f90", 00:10:01.558 "is_configured": true, 00:10:01.558 "data_offset": 0, 00:10:01.558 "data_size": 65536 00:10:01.558 } 00:10:01.558 ] 00:10:01.558 }' 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.558 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.126 [2024-09-30 23:27:41.749572] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.126 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.126 
23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.126 "name": "Existed_Raid", 00:10:02.126 "aliases": [ 00:10:02.126 "07513a51-fe6a-4dd6-8d12-fd0eb7269dd6" 00:10:02.126 ], 00:10:02.126 "product_name": "Raid Volume", 00:10:02.126 "block_size": 512, 00:10:02.126 "num_blocks": 262144, 00:10:02.126 "uuid": "07513a51-fe6a-4dd6-8d12-fd0eb7269dd6", 00:10:02.126 "assigned_rate_limits": { 00:10:02.126 "rw_ios_per_sec": 0, 00:10:02.126 "rw_mbytes_per_sec": 0, 00:10:02.126 "r_mbytes_per_sec": 0, 00:10:02.126 "w_mbytes_per_sec": 0 00:10:02.126 }, 00:10:02.126 "claimed": false, 00:10:02.126 "zoned": false, 00:10:02.126 "supported_io_types": { 00:10:02.126 "read": true, 00:10:02.127 "write": true, 00:10:02.127 "unmap": true, 00:10:02.127 "flush": true, 00:10:02.127 "reset": true, 00:10:02.127 "nvme_admin": false, 00:10:02.127 "nvme_io": false, 00:10:02.127 "nvme_io_md": false, 00:10:02.127 "write_zeroes": true, 00:10:02.127 "zcopy": false, 00:10:02.127 "get_zone_info": false, 00:10:02.127 "zone_management": false, 00:10:02.127 "zone_append": false, 00:10:02.127 "compare": false, 00:10:02.127 "compare_and_write": false, 00:10:02.127 "abort": false, 00:10:02.127 "seek_hole": false, 00:10:02.127 "seek_data": false, 00:10:02.127 "copy": false, 00:10:02.127 "nvme_iov_md": false 00:10:02.127 }, 00:10:02.127 "memory_domains": [ 00:10:02.127 { 00:10:02.127 "dma_device_id": "system", 00:10:02.127 "dma_device_type": 1 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.127 "dma_device_type": 2 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "dma_device_id": "system", 00:10:02.127 "dma_device_type": 1 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.127 "dma_device_type": 2 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "dma_device_id": "system", 00:10:02.127 "dma_device_type": 1 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:02.127 "dma_device_type": 2 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "dma_device_id": "system", 00:10:02.127 "dma_device_type": 1 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.127 "dma_device_type": 2 00:10:02.127 } 00:10:02.127 ], 00:10:02.127 "driver_specific": { 00:10:02.127 "raid": { 00:10:02.127 "uuid": "07513a51-fe6a-4dd6-8d12-fd0eb7269dd6", 00:10:02.127 "strip_size_kb": 64, 00:10:02.127 "state": "online", 00:10:02.127 "raid_level": "concat", 00:10:02.127 "superblock": false, 00:10:02.127 "num_base_bdevs": 4, 00:10:02.127 "num_base_bdevs_discovered": 4, 00:10:02.127 "num_base_bdevs_operational": 4, 00:10:02.127 "base_bdevs_list": [ 00:10:02.127 { 00:10:02.127 "name": "BaseBdev1", 00:10:02.127 "uuid": "25ddd139-bb00-4054-bf3f-9fd1e391de4b", 00:10:02.127 "is_configured": true, 00:10:02.127 "data_offset": 0, 00:10:02.127 "data_size": 65536 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "name": "BaseBdev2", 00:10:02.127 "uuid": "f1636f05-c903-4474-9bd1-5c97058c3576", 00:10:02.127 "is_configured": true, 00:10:02.127 "data_offset": 0, 00:10:02.127 "data_size": 65536 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "name": "BaseBdev3", 00:10:02.127 "uuid": "b8b858e4-92ad-436b-b716-c364c5170742", 00:10:02.127 "is_configured": true, 00:10:02.127 "data_offset": 0, 00:10:02.127 "data_size": 65536 00:10:02.127 }, 00:10:02.127 { 00:10:02.127 "name": "BaseBdev4", 00:10:02.127 "uuid": "296d803c-2ed8-4df7-8fde-8a0354172f90", 00:10:02.127 "is_configured": true, 00:10:02.127 "data_offset": 0, 00:10:02.127 "data_size": 65536 00:10:02.127 } 00:10:02.127 ] 00:10:02.127 } 00:10:02.127 } 00:10:02.127 }' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:02.127 BaseBdev2 
00:10:02.127 BaseBdev3 00:10:02.127 BaseBdev4' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.127 23:27:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.127 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.387 23:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.387 23:27:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.387 [2024-09-30 23:27:42.028827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.387 [2024-09-30 23:27:42.028933] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.387 [2024-09-30 23:27:42.029013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.387 "name": "Existed_Raid", 00:10:02.387 "uuid": "07513a51-fe6a-4dd6-8d12-fd0eb7269dd6", 00:10:02.387 "strip_size_kb": 64, 00:10:02.387 "state": "offline", 00:10:02.387 "raid_level": "concat", 00:10:02.387 "superblock": false, 00:10:02.387 "num_base_bdevs": 4, 00:10:02.387 "num_base_bdevs_discovered": 3, 00:10:02.387 "num_base_bdevs_operational": 3, 00:10:02.387 "base_bdevs_list": [ 00:10:02.387 { 00:10:02.387 "name": null, 00:10:02.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.387 "is_configured": false, 00:10:02.387 "data_offset": 0, 00:10:02.387 "data_size": 65536 00:10:02.387 }, 00:10:02.387 { 00:10:02.387 "name": "BaseBdev2", 00:10:02.387 "uuid": "f1636f05-c903-4474-9bd1-5c97058c3576", 00:10:02.387 "is_configured": 
true, 00:10:02.387 "data_offset": 0, 00:10:02.387 "data_size": 65536 00:10:02.387 }, 00:10:02.387 { 00:10:02.387 "name": "BaseBdev3", 00:10:02.387 "uuid": "b8b858e4-92ad-436b-b716-c364c5170742", 00:10:02.387 "is_configured": true, 00:10:02.387 "data_offset": 0, 00:10:02.387 "data_size": 65536 00:10:02.387 }, 00:10:02.387 { 00:10:02.387 "name": "BaseBdev4", 00:10:02.387 "uuid": "296d803c-2ed8-4df7-8fde-8a0354172f90", 00:10:02.387 "is_configured": true, 00:10:02.387 "data_offset": 0, 00:10:02.387 "data_size": 65536 00:10:02.387 } 00:10:02.387 ] 00:10:02.387 }' 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.387 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 [2024-09-30 23:27:42.459298] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.646 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.905 [2024-09-30 23:27:42.526339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.905 23:27:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:02.905 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.906 [2024-09-30 23:27:42.593609] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:02.906 [2024-09-30 23:27:42.593657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.906 BaseBdev2 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.906 [ 00:10:02.906 { 00:10:02.906 "name": "BaseBdev2", 00:10:02.906 "aliases": [ 00:10:02.906 "f673b38d-35d4-490e-85f2-372a7a88df3b" 00:10:02.906 ], 00:10:02.906 "product_name": "Malloc disk", 00:10:02.906 "block_size": 512, 00:10:02.906 "num_blocks": 65536, 00:10:02.906 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:02.906 "assigned_rate_limits": { 00:10:02.906 "rw_ios_per_sec": 0, 00:10:02.906 "rw_mbytes_per_sec": 0, 00:10:02.906 "r_mbytes_per_sec": 0, 00:10:02.906 "w_mbytes_per_sec": 0 00:10:02.906 }, 00:10:02.906 "claimed": false, 00:10:02.906 "zoned": false, 00:10:02.906 "supported_io_types": { 00:10:02.906 "read": true, 00:10:02.906 "write": true, 00:10:02.906 "unmap": true, 00:10:02.906 "flush": true, 00:10:02.906 "reset": true, 00:10:02.906 "nvme_admin": false, 00:10:02.906 "nvme_io": false, 00:10:02.906 "nvme_io_md": false, 00:10:02.906 "write_zeroes": true, 00:10:02.906 "zcopy": true, 00:10:02.906 "get_zone_info": false, 00:10:02.906 "zone_management": false, 00:10:02.906 "zone_append": false, 00:10:02.906 "compare": false, 00:10:02.906 "compare_and_write": false, 00:10:02.906 "abort": true, 00:10:02.906 "seek_hole": false, 00:10:02.906 
"seek_data": false, 00:10:02.906 "copy": true, 00:10:02.906 "nvme_iov_md": false 00:10:02.906 }, 00:10:02.906 "memory_domains": [ 00:10:02.906 { 00:10:02.906 "dma_device_id": "system", 00:10:02.906 "dma_device_type": 1 00:10:02.906 }, 00:10:02.906 { 00:10:02.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.906 "dma_device_type": 2 00:10:02.906 } 00:10:02.906 ], 00:10:02.906 "driver_specific": {} 00:10:02.906 } 00:10:02.906 ] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.906 BaseBdev3 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.906 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.906 [ 00:10:02.906 { 00:10:02.906 "name": "BaseBdev3", 00:10:02.906 "aliases": [ 00:10:02.906 "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c" 00:10:02.906 ], 00:10:02.906 "product_name": "Malloc disk", 00:10:02.906 "block_size": 512, 00:10:02.906 "num_blocks": 65536, 00:10:02.906 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:02.906 "assigned_rate_limits": { 00:10:02.906 "rw_ios_per_sec": 0, 00:10:02.906 "rw_mbytes_per_sec": 0, 00:10:02.906 "r_mbytes_per_sec": 0, 00:10:02.906 "w_mbytes_per_sec": 0 00:10:02.906 }, 00:10:02.906 "claimed": false, 00:10:02.906 "zoned": false, 00:10:02.906 "supported_io_types": { 00:10:02.906 "read": true, 00:10:02.906 "write": true, 00:10:02.906 "unmap": true, 00:10:02.906 "flush": true, 00:10:02.906 "reset": true, 00:10:03.165 "nvme_admin": false, 00:10:03.165 "nvme_io": false, 00:10:03.165 "nvme_io_md": false, 00:10:03.165 "write_zeroes": true, 00:10:03.165 "zcopy": true, 00:10:03.165 "get_zone_info": false, 00:10:03.165 "zone_management": false, 00:10:03.165 "zone_append": false, 00:10:03.165 "compare": false, 00:10:03.165 "compare_and_write": false, 00:10:03.165 "abort": true, 00:10:03.165 "seek_hole": false, 00:10:03.165 "seek_data": false, 
00:10:03.165 "copy": true, 00:10:03.165 "nvme_iov_md": false 00:10:03.165 }, 00:10:03.165 "memory_domains": [ 00:10:03.165 { 00:10:03.165 "dma_device_id": "system", 00:10:03.165 "dma_device_type": 1 00:10:03.165 }, 00:10:03.165 { 00:10:03.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.165 "dma_device_type": 2 00:10:03.165 } 00:10:03.165 ], 00:10:03.165 "driver_specific": {} 00:10:03.165 } 00:10:03.165 ] 00:10:03.165 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.165 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.165 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.165 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.165 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.166 BaseBdev4 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.166 
23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.166 [ 00:10:03.166 { 00:10:03.166 "name": "BaseBdev4", 00:10:03.166 "aliases": [ 00:10:03.166 "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283" 00:10:03.166 ], 00:10:03.166 "product_name": "Malloc disk", 00:10:03.166 "block_size": 512, 00:10:03.166 "num_blocks": 65536, 00:10:03.166 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:03.166 "assigned_rate_limits": { 00:10:03.166 "rw_ios_per_sec": 0, 00:10:03.166 "rw_mbytes_per_sec": 0, 00:10:03.166 "r_mbytes_per_sec": 0, 00:10:03.166 "w_mbytes_per_sec": 0 00:10:03.166 }, 00:10:03.166 "claimed": false, 00:10:03.166 "zoned": false, 00:10:03.166 "supported_io_types": { 00:10:03.166 "read": true, 00:10:03.166 "write": true, 00:10:03.166 "unmap": true, 00:10:03.166 "flush": true, 00:10:03.166 "reset": true, 00:10:03.166 "nvme_admin": false, 00:10:03.166 "nvme_io": false, 00:10:03.166 "nvme_io_md": false, 00:10:03.166 "write_zeroes": true, 00:10:03.166 "zcopy": true, 00:10:03.166 "get_zone_info": false, 00:10:03.166 "zone_management": false, 00:10:03.166 "zone_append": false, 00:10:03.166 "compare": false, 00:10:03.166 "compare_and_write": false, 00:10:03.166 "abort": true, 00:10:03.166 "seek_hole": false, 00:10:03.166 "seek_data": false, 00:10:03.166 
"copy": true, 00:10:03.166 "nvme_iov_md": false 00:10:03.166 }, 00:10:03.166 "memory_domains": [ 00:10:03.166 { 00:10:03.166 "dma_device_id": "system", 00:10:03.166 "dma_device_type": 1 00:10:03.166 }, 00:10:03.166 { 00:10:03.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.166 "dma_device_type": 2 00:10:03.166 } 00:10:03.166 ], 00:10:03.166 "driver_specific": {} 00:10:03.166 } 00:10:03.166 ] 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.166 [2024-09-30 23:27:42.825214] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.166 [2024-09-30 23:27:42.825302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.166 [2024-09-30 23:27:42.825360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.166 [2024-09-30 23:27:42.827191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.166 [2024-09-30 23:27:42.827282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.166 23:27:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.166 "name": "Existed_Raid", 00:10:03.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.166 "strip_size_kb": 64, 00:10:03.166 "state": "configuring", 00:10:03.166 
"raid_level": "concat", 00:10:03.166 "superblock": false, 00:10:03.166 "num_base_bdevs": 4, 00:10:03.166 "num_base_bdevs_discovered": 3, 00:10:03.166 "num_base_bdevs_operational": 4, 00:10:03.166 "base_bdevs_list": [ 00:10:03.166 { 00:10:03.166 "name": "BaseBdev1", 00:10:03.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.166 "is_configured": false, 00:10:03.166 "data_offset": 0, 00:10:03.166 "data_size": 0 00:10:03.166 }, 00:10:03.166 { 00:10:03.166 "name": "BaseBdev2", 00:10:03.166 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:03.166 "is_configured": true, 00:10:03.166 "data_offset": 0, 00:10:03.166 "data_size": 65536 00:10:03.166 }, 00:10:03.166 { 00:10:03.166 "name": "BaseBdev3", 00:10:03.166 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:03.166 "is_configured": true, 00:10:03.166 "data_offset": 0, 00:10:03.166 "data_size": 65536 00:10:03.166 }, 00:10:03.166 { 00:10:03.166 "name": "BaseBdev4", 00:10:03.166 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:03.166 "is_configured": true, 00:10:03.166 "data_offset": 0, 00:10:03.166 "data_size": 65536 00:10:03.166 } 00:10:03.166 ] 00:10:03.166 }' 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.166 23:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.425 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.425 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.425 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.425 [2024-09-30 23:27:43.204571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.425 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.425 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.425 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.425 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.425 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.426 "name": "Existed_Raid", 00:10:03.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.426 "strip_size_kb": 64, 00:10:03.426 "state": "configuring", 00:10:03.426 "raid_level": "concat", 00:10:03.426 "superblock": false, 
00:10:03.426 "num_base_bdevs": 4, 00:10:03.426 "num_base_bdevs_discovered": 2, 00:10:03.426 "num_base_bdevs_operational": 4, 00:10:03.426 "base_bdevs_list": [ 00:10:03.426 { 00:10:03.426 "name": "BaseBdev1", 00:10:03.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.426 "is_configured": false, 00:10:03.426 "data_offset": 0, 00:10:03.426 "data_size": 0 00:10:03.426 }, 00:10:03.426 { 00:10:03.426 "name": null, 00:10:03.426 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:03.426 "is_configured": false, 00:10:03.426 "data_offset": 0, 00:10:03.426 "data_size": 65536 00:10:03.426 }, 00:10:03.426 { 00:10:03.426 "name": "BaseBdev3", 00:10:03.426 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:03.426 "is_configured": true, 00:10:03.426 "data_offset": 0, 00:10:03.426 "data_size": 65536 00:10:03.426 }, 00:10:03.426 { 00:10:03.426 "name": "BaseBdev4", 00:10:03.426 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:03.426 "is_configured": true, 00:10:03.426 "data_offset": 0, 00:10:03.426 "data_size": 65536 00:10:03.426 } 00:10:03.426 ] 00:10:03.426 }' 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.426 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.995 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.996 23:27:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.996 BaseBdev1 00:10:03.996 [2024-09-30 23:27:43.666701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.996 [ 00:10:03.996 { 00:10:03.996 "name": "BaseBdev1", 00:10:03.996 "aliases": [ 00:10:03.996 "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e" 00:10:03.996 ], 00:10:03.996 "product_name": "Malloc disk", 00:10:03.996 "block_size": 512, 00:10:03.996 "num_blocks": 65536, 00:10:03.996 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:03.996 "assigned_rate_limits": { 00:10:03.996 "rw_ios_per_sec": 0, 00:10:03.996 "rw_mbytes_per_sec": 0, 00:10:03.996 "r_mbytes_per_sec": 0, 00:10:03.996 "w_mbytes_per_sec": 0 00:10:03.996 }, 00:10:03.996 "claimed": true, 00:10:03.996 "claim_type": "exclusive_write", 00:10:03.996 "zoned": false, 00:10:03.996 "supported_io_types": { 00:10:03.996 "read": true, 00:10:03.996 "write": true, 00:10:03.996 "unmap": true, 00:10:03.996 "flush": true, 00:10:03.996 "reset": true, 00:10:03.996 "nvme_admin": false, 00:10:03.996 "nvme_io": false, 00:10:03.996 "nvme_io_md": false, 00:10:03.996 "write_zeroes": true, 00:10:03.996 "zcopy": true, 00:10:03.996 "get_zone_info": false, 00:10:03.996 "zone_management": false, 00:10:03.996 "zone_append": false, 00:10:03.996 "compare": false, 00:10:03.996 "compare_and_write": false, 00:10:03.996 "abort": true, 00:10:03.996 "seek_hole": false, 00:10:03.996 "seek_data": false, 00:10:03.996 "copy": true, 00:10:03.996 "nvme_iov_md": false 00:10:03.996 }, 00:10:03.996 "memory_domains": [ 00:10:03.996 { 00:10:03.996 "dma_device_id": "system", 00:10:03.996 "dma_device_type": 1 00:10:03.996 }, 00:10:03.996 { 00:10:03.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.996 "dma_device_type": 2 00:10:03.996 } 00:10:03.996 ], 00:10:03.996 "driver_specific": {} 00:10:03.996 } 00:10:03.996 ] 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.996 "name": "Existed_Raid", 00:10:03.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.996 "strip_size_kb": 64, 00:10:03.996 "state": "configuring", 00:10:03.996 "raid_level": "concat", 00:10:03.996 "superblock": false, 
00:10:03.996 "num_base_bdevs": 4, 00:10:03.996 "num_base_bdevs_discovered": 3, 00:10:03.996 "num_base_bdevs_operational": 4, 00:10:03.996 "base_bdevs_list": [ 00:10:03.996 { 00:10:03.996 "name": "BaseBdev1", 00:10:03.996 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:03.996 "is_configured": true, 00:10:03.996 "data_offset": 0, 00:10:03.996 "data_size": 65536 00:10:03.996 }, 00:10:03.996 { 00:10:03.996 "name": null, 00:10:03.996 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:03.996 "is_configured": false, 00:10:03.996 "data_offset": 0, 00:10:03.996 "data_size": 65536 00:10:03.996 }, 00:10:03.996 { 00:10:03.996 "name": "BaseBdev3", 00:10:03.996 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:03.996 "is_configured": true, 00:10:03.996 "data_offset": 0, 00:10:03.996 "data_size": 65536 00:10:03.996 }, 00:10:03.996 { 00:10:03.996 "name": "BaseBdev4", 00:10:03.996 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:03.996 "is_configured": true, 00:10:03.996 "data_offset": 0, 00:10:03.996 "data_size": 65536 00:10:03.996 } 00:10:03.996 ] 00:10:03.996 }' 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.996 23:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.565 23:27:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.565 [2024-09-30 23:27:44.213809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.565 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.566 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.566 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.566 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.566 23:27:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.566 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.566 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.566 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.566 "name": "Existed_Raid", 00:10:04.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.566 "strip_size_kb": 64, 00:10:04.566 "state": "configuring", 00:10:04.566 "raid_level": "concat", 00:10:04.566 "superblock": false, 00:10:04.566 "num_base_bdevs": 4, 00:10:04.566 "num_base_bdevs_discovered": 2, 00:10:04.566 "num_base_bdevs_operational": 4, 00:10:04.566 "base_bdevs_list": [ 00:10:04.566 { 00:10:04.566 "name": "BaseBdev1", 00:10:04.566 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:04.566 "is_configured": true, 00:10:04.566 "data_offset": 0, 00:10:04.566 "data_size": 65536 00:10:04.566 }, 00:10:04.566 { 00:10:04.566 "name": null, 00:10:04.566 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:04.566 "is_configured": false, 00:10:04.566 "data_offset": 0, 00:10:04.566 "data_size": 65536 00:10:04.566 }, 00:10:04.566 { 00:10:04.566 "name": null, 00:10:04.566 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:04.566 "is_configured": false, 00:10:04.566 "data_offset": 0, 00:10:04.566 "data_size": 65536 00:10:04.566 }, 00:10:04.566 { 00:10:04.566 "name": "BaseBdev4", 00:10:04.566 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:04.566 "is_configured": true, 00:10:04.566 "data_offset": 0, 00:10:04.566 "data_size": 65536 00:10:04.566 } 00:10:04.566 ] 00:10:04.566 }' 00:10:04.566 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.566 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.151 [2024-09-30 23:27:44.729028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.151 "name": "Existed_Raid", 00:10:05.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.151 "strip_size_kb": 64, 00:10:05.151 "state": "configuring", 00:10:05.151 "raid_level": "concat", 00:10:05.151 "superblock": false, 00:10:05.151 "num_base_bdevs": 4, 00:10:05.151 "num_base_bdevs_discovered": 3, 00:10:05.151 "num_base_bdevs_operational": 4, 00:10:05.151 "base_bdevs_list": [ 00:10:05.151 { 00:10:05.151 "name": "BaseBdev1", 00:10:05.151 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:05.151 "is_configured": true, 00:10:05.151 "data_offset": 0, 00:10:05.151 "data_size": 65536 00:10:05.151 }, 00:10:05.151 { 00:10:05.151 "name": null, 00:10:05.151 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:05.151 "is_configured": false, 00:10:05.151 "data_offset": 0, 00:10:05.151 "data_size": 65536 00:10:05.151 }, 00:10:05.151 { 00:10:05.151 "name": "BaseBdev3", 00:10:05.151 "uuid": 
"42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:05.151 "is_configured": true, 00:10:05.151 "data_offset": 0, 00:10:05.151 "data_size": 65536 00:10:05.151 }, 00:10:05.151 { 00:10:05.151 "name": "BaseBdev4", 00:10:05.151 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:05.151 "is_configured": true, 00:10:05.151 "data_offset": 0, 00:10:05.151 "data_size": 65536 00:10:05.151 } 00:10:05.151 ] 00:10:05.151 }' 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.151 23:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.410 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.411 [2024-09-30 23:27:45.220190] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.411 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.669 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.669 "name": "Existed_Raid", 00:10:05.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.669 "strip_size_kb": 64, 00:10:05.669 "state": "configuring", 00:10:05.669 "raid_level": "concat", 00:10:05.669 "superblock": false, 00:10:05.669 "num_base_bdevs": 4, 00:10:05.669 
"num_base_bdevs_discovered": 2, 00:10:05.669 "num_base_bdevs_operational": 4, 00:10:05.669 "base_bdevs_list": [ 00:10:05.669 { 00:10:05.669 "name": null, 00:10:05.669 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:05.669 "is_configured": false, 00:10:05.669 "data_offset": 0, 00:10:05.669 "data_size": 65536 00:10:05.669 }, 00:10:05.669 { 00:10:05.669 "name": null, 00:10:05.669 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:05.669 "is_configured": false, 00:10:05.669 "data_offset": 0, 00:10:05.669 "data_size": 65536 00:10:05.669 }, 00:10:05.669 { 00:10:05.670 "name": "BaseBdev3", 00:10:05.670 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:05.670 "is_configured": true, 00:10:05.670 "data_offset": 0, 00:10:05.670 "data_size": 65536 00:10:05.670 }, 00:10:05.670 { 00:10:05.670 "name": "BaseBdev4", 00:10:05.670 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:05.670 "is_configured": true, 00:10:05.670 "data_offset": 0, 00:10:05.670 "data_size": 65536 00:10:05.670 } 00:10:05.670 ] 00:10:05.670 }' 00:10:05.670 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.670 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.929 [2024-09-30 23:27:45.718004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.929 "name": "Existed_Raid", 00:10:05.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.929 "strip_size_kb": 64, 00:10:05.929 "state": "configuring", 00:10:05.929 "raid_level": "concat", 00:10:05.929 "superblock": false, 00:10:05.929 "num_base_bdevs": 4, 00:10:05.929 "num_base_bdevs_discovered": 3, 00:10:05.929 "num_base_bdevs_operational": 4, 00:10:05.929 "base_bdevs_list": [ 00:10:05.929 { 00:10:05.929 "name": null, 00:10:05.929 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:05.929 "is_configured": false, 00:10:05.929 "data_offset": 0, 00:10:05.929 "data_size": 65536 00:10:05.929 }, 00:10:05.929 { 00:10:05.929 "name": "BaseBdev2", 00:10:05.929 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:05.929 "is_configured": true, 00:10:05.929 "data_offset": 0, 00:10:05.929 "data_size": 65536 00:10:05.929 }, 00:10:05.929 { 00:10:05.929 "name": "BaseBdev3", 00:10:05.929 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:05.929 "is_configured": true, 00:10:05.929 "data_offset": 0, 00:10:05.929 "data_size": 65536 00:10:05.929 }, 00:10:05.929 { 00:10:05.929 "name": "BaseBdev4", 00:10:05.929 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:05.929 "is_configured": true, 00:10:05.929 "data_offset": 0, 00:10:05.929 "data_size": 65536 00:10:05.929 } 00:10:05.929 ] 00:10:05.929 }' 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.929 23:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dca6b9fa-7e43-4dbe-8a56-14aa850bc81e 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 [2024-09-30 23:27:46.212078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.496 [2024-09-30 23:27:46.212207] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:06.496 [2024-09-30 23:27:46.212232] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:06.496 [2024-09-30 23:27:46.212530] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:06.496 [2024-09-30 23:27:46.212690] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:06.496 [2024-09-30 23:27:46.212735] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:06.496 [2024-09-30 23:27:46.212971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.496 NewBaseBdev 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.496 23:27:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 [ 00:10:06.496 { 00:10:06.496 "name": "NewBaseBdev", 00:10:06.496 "aliases": [ 00:10:06.496 "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e" 00:10:06.496 ], 00:10:06.496 "product_name": "Malloc disk", 00:10:06.496 "block_size": 512, 00:10:06.496 "num_blocks": 65536, 00:10:06.496 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:06.496 "assigned_rate_limits": { 00:10:06.496 "rw_ios_per_sec": 0, 00:10:06.496 "rw_mbytes_per_sec": 0, 00:10:06.496 "r_mbytes_per_sec": 0, 00:10:06.496 "w_mbytes_per_sec": 0 00:10:06.496 }, 00:10:06.496 "claimed": true, 00:10:06.496 "claim_type": "exclusive_write", 00:10:06.496 "zoned": false, 00:10:06.496 "supported_io_types": { 00:10:06.496 "read": true, 00:10:06.496 "write": true, 00:10:06.496 "unmap": true, 00:10:06.496 "flush": true, 00:10:06.496 "reset": true, 00:10:06.496 "nvme_admin": false, 00:10:06.496 "nvme_io": false, 00:10:06.496 "nvme_io_md": false, 00:10:06.496 "write_zeroes": true, 00:10:06.496 "zcopy": true, 00:10:06.496 "get_zone_info": false, 00:10:06.496 "zone_management": false, 00:10:06.496 "zone_append": false, 00:10:06.496 "compare": false, 00:10:06.496 "compare_and_write": false, 00:10:06.496 "abort": true, 00:10:06.496 "seek_hole": false, 00:10:06.496 "seek_data": false, 00:10:06.496 "copy": true, 00:10:06.496 "nvme_iov_md": false 00:10:06.496 }, 00:10:06.496 "memory_domains": [ 00:10:06.496 { 00:10:06.496 "dma_device_id": "system", 00:10:06.496 "dma_device_type": 1 00:10:06.496 }, 00:10:06.496 { 00:10:06.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.496 "dma_device_type": 2 00:10:06.496 } 00:10:06.496 ], 00:10:06.496 "driver_specific": {} 00:10:06.496 } 00:10:06.496 ] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.496 23:27:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.496 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.496 "name": "Existed_Raid", 00:10:06.496 "uuid": "11e638cb-25af-445c-a3da-262390612783", 00:10:06.496 "strip_size_kb": 64, 00:10:06.496 "state": "online", 00:10:06.496 "raid_level": 
"concat", 00:10:06.496 "superblock": false, 00:10:06.496 "num_base_bdevs": 4, 00:10:06.496 "num_base_bdevs_discovered": 4, 00:10:06.496 "num_base_bdevs_operational": 4, 00:10:06.496 "base_bdevs_list": [ 00:10:06.496 { 00:10:06.496 "name": "NewBaseBdev", 00:10:06.496 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:06.496 "is_configured": true, 00:10:06.496 "data_offset": 0, 00:10:06.496 "data_size": 65536 00:10:06.496 }, 00:10:06.496 { 00:10:06.496 "name": "BaseBdev2", 00:10:06.496 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:06.496 "is_configured": true, 00:10:06.496 "data_offset": 0, 00:10:06.496 "data_size": 65536 00:10:06.496 }, 00:10:06.496 { 00:10:06.496 "name": "BaseBdev3", 00:10:06.496 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:06.497 "is_configured": true, 00:10:06.497 "data_offset": 0, 00:10:06.497 "data_size": 65536 00:10:06.497 }, 00:10:06.497 { 00:10:06.497 "name": "BaseBdev4", 00:10:06.497 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:06.497 "is_configured": true, 00:10:06.497 "data_offset": 0, 00:10:06.497 "data_size": 65536 00:10:06.497 } 00:10:06.497 ] 00:10:06.497 }' 00:10:06.497 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.497 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.064 [2024-09-30 23:27:46.683649] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.064 "name": "Existed_Raid", 00:10:07.064 "aliases": [ 00:10:07.064 "11e638cb-25af-445c-a3da-262390612783" 00:10:07.064 ], 00:10:07.064 "product_name": "Raid Volume", 00:10:07.064 "block_size": 512, 00:10:07.064 "num_blocks": 262144, 00:10:07.064 "uuid": "11e638cb-25af-445c-a3da-262390612783", 00:10:07.064 "assigned_rate_limits": { 00:10:07.064 "rw_ios_per_sec": 0, 00:10:07.064 "rw_mbytes_per_sec": 0, 00:10:07.064 "r_mbytes_per_sec": 0, 00:10:07.064 "w_mbytes_per_sec": 0 00:10:07.064 }, 00:10:07.064 "claimed": false, 00:10:07.064 "zoned": false, 00:10:07.064 "supported_io_types": { 00:10:07.064 "read": true, 00:10:07.064 "write": true, 00:10:07.064 "unmap": true, 00:10:07.064 "flush": true, 00:10:07.064 "reset": true, 00:10:07.064 "nvme_admin": false, 00:10:07.064 "nvme_io": false, 00:10:07.064 "nvme_io_md": false, 00:10:07.064 "write_zeroes": true, 00:10:07.064 "zcopy": false, 00:10:07.064 "get_zone_info": false, 00:10:07.064 "zone_management": false, 00:10:07.064 "zone_append": false, 00:10:07.064 "compare": false, 00:10:07.064 "compare_and_write": false, 00:10:07.064 "abort": false, 00:10:07.064 "seek_hole": false, 00:10:07.064 "seek_data": false, 00:10:07.064 "copy": false, 
00:10:07.064 "nvme_iov_md": false 00:10:07.064 }, 00:10:07.064 "memory_domains": [ 00:10:07.064 { 00:10:07.064 "dma_device_id": "system", 00:10:07.064 "dma_device_type": 1 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.064 "dma_device_type": 2 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "dma_device_id": "system", 00:10:07.064 "dma_device_type": 1 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.064 "dma_device_type": 2 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "dma_device_id": "system", 00:10:07.064 "dma_device_type": 1 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.064 "dma_device_type": 2 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "dma_device_id": "system", 00:10:07.064 "dma_device_type": 1 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.064 "dma_device_type": 2 00:10:07.064 } 00:10:07.064 ], 00:10:07.064 "driver_specific": { 00:10:07.064 "raid": { 00:10:07.064 "uuid": "11e638cb-25af-445c-a3da-262390612783", 00:10:07.064 "strip_size_kb": 64, 00:10:07.064 "state": "online", 00:10:07.064 "raid_level": "concat", 00:10:07.064 "superblock": false, 00:10:07.064 "num_base_bdevs": 4, 00:10:07.064 "num_base_bdevs_discovered": 4, 00:10:07.064 "num_base_bdevs_operational": 4, 00:10:07.064 "base_bdevs_list": [ 00:10:07.064 { 00:10:07.064 "name": "NewBaseBdev", 00:10:07.064 "uuid": "dca6b9fa-7e43-4dbe-8a56-14aa850bc81e", 00:10:07.064 "is_configured": true, 00:10:07.064 "data_offset": 0, 00:10:07.064 "data_size": 65536 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "name": "BaseBdev2", 00:10:07.064 "uuid": "f673b38d-35d4-490e-85f2-372a7a88df3b", 00:10:07.064 "is_configured": true, 00:10:07.064 "data_offset": 0, 00:10:07.064 "data_size": 65536 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "name": "BaseBdev3", 00:10:07.064 "uuid": "42e5ee6c-eecd-44be-b6ad-10d674ca4c5c", 00:10:07.064 
"is_configured": true, 00:10:07.064 "data_offset": 0, 00:10:07.064 "data_size": 65536 00:10:07.064 }, 00:10:07.064 { 00:10:07.064 "name": "BaseBdev4", 00:10:07.064 "uuid": "a4c65b23-6dfb-4cdc-8b3f-811f1ede5283", 00:10:07.064 "is_configured": true, 00:10:07.064 "data_offset": 0, 00:10:07.064 "data_size": 65536 00:10:07.064 } 00:10:07.064 ] 00:10:07.064 } 00:10:07.064 } 00:10:07.064 }' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:07.064 BaseBdev2 00:10:07.064 BaseBdev3 00:10:07.064 BaseBdev4' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.064 23:27:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.064 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.323 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.323 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.323 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.323 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.323 23:27:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:07.323 23:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.323 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.323 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.323 23:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.323 [2024-09-30 23:27:47.014805] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.323 [2024-09-30 23:27:47.014834] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.323 [2024-09-30 23:27:47.014924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.323 [2024-09-30 23:27:47.014998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.323 [2024-09-30 23:27:47.015008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 82214 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82214 ']' 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82214 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82214 00:10:07.323 killing process with pid 82214 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82214' 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82214 00:10:07.323 [2024-09-30 23:27:47.067577] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.323 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82214 00:10:07.323 [2024-09-30 23:27:47.108423] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.581 23:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:07.581 00:10:07.581 real 0m9.325s 00:10:07.581 user 0m15.844s 00:10:07.581 sys 0m1.999s 00:10:07.581 ************************************ 00:10:07.582 END TEST raid_state_function_test 00:10:07.582 ************************************ 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:07.582 23:27:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:07.582 23:27:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:07.582 23:27:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.582 23:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.582 ************************************ 00:10:07.582 START TEST raid_state_function_test_sb 00:10:07.582 ************************************ 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.582 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.840 
23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=82862 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82862' 00:10:07.840 Process raid pid: 82862 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82862 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82862 ']' 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.840 23:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.841 23:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.841 [2024-09-30 23:27:47.525626] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:07.841 [2024-09-30 23:27:47.525801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.841 [2024-09-30 23:27:47.687471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.100 [2024-09-30 23:27:47.731966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.100 [2024-09-30 23:27:47.774043] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.100 [2024-09-30 23:27:47.774157] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.669 [2024-09-30 23:27:48.355462] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.669 [2024-09-30 23:27:48.355512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.669 [2024-09-30 23:27:48.355524] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.669 [2024-09-30 23:27:48.355533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.669 [2024-09-30 23:27:48.355539] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:08.669 [2024-09-30 23:27:48.355550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.669 [2024-09-30 23:27:48.355556] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:08.669 [2024-09-30 23:27:48.355564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.669 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.670 23:27:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.670 "name": "Existed_Raid", 00:10:08.670 "uuid": "a8fdcfb5-cd99-4904-9e46-1042a0be6195", 00:10:08.670 "strip_size_kb": 64, 00:10:08.670 "state": "configuring", 00:10:08.670 "raid_level": "concat", 00:10:08.670 "superblock": true, 00:10:08.670 "num_base_bdevs": 4, 00:10:08.670 "num_base_bdevs_discovered": 0, 00:10:08.670 "num_base_bdevs_operational": 4, 00:10:08.670 "base_bdevs_list": [ 00:10:08.670 { 00:10:08.670 "name": "BaseBdev1", 00:10:08.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.670 "is_configured": false, 00:10:08.670 "data_offset": 0, 00:10:08.670 "data_size": 0 00:10:08.670 }, 00:10:08.670 { 00:10:08.670 "name": "BaseBdev2", 00:10:08.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.670 "is_configured": false, 00:10:08.670 "data_offset": 0, 00:10:08.670 "data_size": 0 00:10:08.670 }, 00:10:08.670 { 00:10:08.670 "name": "BaseBdev3", 00:10:08.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.670 "is_configured": false, 00:10:08.670 "data_offset": 0, 00:10:08.670 "data_size": 0 00:10:08.670 }, 00:10:08.670 { 00:10:08.670 "name": "BaseBdev4", 00:10:08.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.670 "is_configured": false, 00:10:08.670 "data_offset": 0, 00:10:08.670 "data_size": 0 00:10:08.670 } 00:10:08.670 ] 00:10:08.670 }' 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.670 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.939 23:27:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.939 [2024-09-30 23:27:48.734774] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.939 [2024-09-30 23:27:48.734818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.939 [2024-09-30 23:27:48.746796] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.939 [2024-09-30 23:27:48.746837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.939 [2024-09-30 23:27:48.746861] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.939 [2024-09-30 23:27:48.746872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.939 [2024-09-30 23:27:48.746887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.939 [2024-09-30 23:27:48.746896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.939 [2024-09-30 23:27:48.746902] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:08.939 [2024-09-30 23:27:48.746911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.939 [2024-09-30 23:27:48.767542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.939 BaseBdev1 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.939 23:27:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.940 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.940 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.940 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.214 [ 00:10:09.214 { 00:10:09.214 "name": "BaseBdev1", 00:10:09.214 "aliases": [ 00:10:09.214 "c686daf3-5911-414b-82ee-4cc734b16faf" 00:10:09.214 ], 00:10:09.214 "product_name": "Malloc disk", 00:10:09.214 "block_size": 512, 00:10:09.214 "num_blocks": 65536, 00:10:09.214 "uuid": "c686daf3-5911-414b-82ee-4cc734b16faf", 00:10:09.214 "assigned_rate_limits": { 00:10:09.214 "rw_ios_per_sec": 0, 00:10:09.214 "rw_mbytes_per_sec": 0, 00:10:09.214 "r_mbytes_per_sec": 0, 00:10:09.214 "w_mbytes_per_sec": 0 00:10:09.214 }, 00:10:09.214 "claimed": true, 00:10:09.214 "claim_type": "exclusive_write", 00:10:09.214 "zoned": false, 00:10:09.214 "supported_io_types": { 00:10:09.214 "read": true, 00:10:09.214 "write": true, 00:10:09.214 "unmap": true, 00:10:09.214 "flush": true, 00:10:09.214 "reset": true, 00:10:09.214 "nvme_admin": false, 00:10:09.214 "nvme_io": false, 00:10:09.214 "nvme_io_md": false, 00:10:09.214 "write_zeroes": true, 00:10:09.214 "zcopy": true, 00:10:09.214 "get_zone_info": false, 00:10:09.214 "zone_management": false, 00:10:09.214 "zone_append": false, 00:10:09.214 "compare": false, 00:10:09.214 "compare_and_write": false, 00:10:09.214 "abort": true, 00:10:09.214 "seek_hole": false, 00:10:09.214 "seek_data": false, 00:10:09.214 "copy": true, 00:10:09.214 "nvme_iov_md": false 00:10:09.214 }, 00:10:09.214 "memory_domains": [ 00:10:09.214 { 00:10:09.214 "dma_device_id": "system", 00:10:09.214 "dma_device_type": 1 00:10:09.214 }, 00:10:09.214 { 00:10:09.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.214 "dma_device_type": 2 00:10:09.214 } 
00:10:09.214 ], 00:10:09.214 "driver_specific": {} 00:10:09.214 } 00:10:09.214 ] 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.214 23:27:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.214 "name": "Existed_Raid", 00:10:09.214 "uuid": "0e1e9cf5-0118-49f6-bd12-9349b58a2dd0", 00:10:09.214 "strip_size_kb": 64, 00:10:09.214 "state": "configuring", 00:10:09.214 "raid_level": "concat", 00:10:09.214 "superblock": true, 00:10:09.214 "num_base_bdevs": 4, 00:10:09.214 "num_base_bdevs_discovered": 1, 00:10:09.214 "num_base_bdevs_operational": 4, 00:10:09.214 "base_bdevs_list": [ 00:10:09.214 { 00:10:09.214 "name": "BaseBdev1", 00:10:09.214 "uuid": "c686daf3-5911-414b-82ee-4cc734b16faf", 00:10:09.214 "is_configured": true, 00:10:09.214 "data_offset": 2048, 00:10:09.214 "data_size": 63488 00:10:09.214 }, 00:10:09.214 { 00:10:09.214 "name": "BaseBdev2", 00:10:09.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.214 "is_configured": false, 00:10:09.214 "data_offset": 0, 00:10:09.214 "data_size": 0 00:10:09.214 }, 00:10:09.214 { 00:10:09.214 "name": "BaseBdev3", 00:10:09.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.214 "is_configured": false, 00:10:09.214 "data_offset": 0, 00:10:09.214 "data_size": 0 00:10:09.214 }, 00:10:09.214 { 00:10:09.214 "name": "BaseBdev4", 00:10:09.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.214 "is_configured": false, 00:10:09.214 "data_offset": 0, 00:10:09.214 "data_size": 0 00:10:09.214 } 00:10:09.214 ] 00:10:09.214 }' 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.214 23:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.473 23:27:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 [2024-09-30 23:27:49.226806] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.473 [2024-09-30 23:27:49.226855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 [2024-09-30 23:27:49.238897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.473 [2024-09-30 23:27:49.240710] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.473 [2024-09-30 23:27:49.240753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.473 [2024-09-30 23:27:49.240762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.473 [2024-09-30 23:27:49.240771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.473 [2024-09-30 23:27:49.240777] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:09.473 [2024-09-30 23:27:49.240784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:09.473 "name": "Existed_Raid", 00:10:09.473 "uuid": "cd1b8c35-e3df-4bd6-b9dc-a39ebb45a260", 00:10:09.473 "strip_size_kb": 64, 00:10:09.473 "state": "configuring", 00:10:09.473 "raid_level": "concat", 00:10:09.473 "superblock": true, 00:10:09.473 "num_base_bdevs": 4, 00:10:09.473 "num_base_bdevs_discovered": 1, 00:10:09.473 "num_base_bdevs_operational": 4, 00:10:09.473 "base_bdevs_list": [ 00:10:09.473 { 00:10:09.473 "name": "BaseBdev1", 00:10:09.473 "uuid": "c686daf3-5911-414b-82ee-4cc734b16faf", 00:10:09.473 "is_configured": true, 00:10:09.473 "data_offset": 2048, 00:10:09.473 "data_size": 63488 00:10:09.473 }, 00:10:09.473 { 00:10:09.473 "name": "BaseBdev2", 00:10:09.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.473 "is_configured": false, 00:10:09.473 "data_offset": 0, 00:10:09.473 "data_size": 0 00:10:09.473 }, 00:10:09.473 { 00:10:09.473 "name": "BaseBdev3", 00:10:09.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.473 "is_configured": false, 00:10:09.473 "data_offset": 0, 00:10:09.473 "data_size": 0 00:10:09.473 }, 00:10:09.473 { 00:10:09.473 "name": "BaseBdev4", 00:10:09.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.473 "is_configured": false, 00:10:09.473 "data_offset": 0, 00:10:09.473 "data_size": 0 00:10:09.473 } 00:10:09.473 ] 00:10:09.473 }' 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.473 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.041 [2024-09-30 23:27:49.684473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:10.041 BaseBdev2 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.041 [ 00:10:10.041 { 00:10:10.041 "name": "BaseBdev2", 00:10:10.041 "aliases": [ 00:10:10.041 "1a574be1-7219-4567-b7bc-cbd87d945160" 00:10:10.041 ], 00:10:10.041 "product_name": "Malloc disk", 00:10:10.041 "block_size": 512, 00:10:10.041 "num_blocks": 65536, 00:10:10.041 "uuid": "1a574be1-7219-4567-b7bc-cbd87d945160", 
00:10:10.041 "assigned_rate_limits": { 00:10:10.041 "rw_ios_per_sec": 0, 00:10:10.041 "rw_mbytes_per_sec": 0, 00:10:10.041 "r_mbytes_per_sec": 0, 00:10:10.041 "w_mbytes_per_sec": 0 00:10:10.041 }, 00:10:10.041 "claimed": true, 00:10:10.041 "claim_type": "exclusive_write", 00:10:10.041 "zoned": false, 00:10:10.041 "supported_io_types": { 00:10:10.041 "read": true, 00:10:10.041 "write": true, 00:10:10.041 "unmap": true, 00:10:10.041 "flush": true, 00:10:10.041 "reset": true, 00:10:10.041 "nvme_admin": false, 00:10:10.041 "nvme_io": false, 00:10:10.041 "nvme_io_md": false, 00:10:10.041 "write_zeroes": true, 00:10:10.041 "zcopy": true, 00:10:10.041 "get_zone_info": false, 00:10:10.041 "zone_management": false, 00:10:10.041 "zone_append": false, 00:10:10.041 "compare": false, 00:10:10.041 "compare_and_write": false, 00:10:10.041 "abort": true, 00:10:10.041 "seek_hole": false, 00:10:10.041 "seek_data": false, 00:10:10.041 "copy": true, 00:10:10.041 "nvme_iov_md": false 00:10:10.041 }, 00:10:10.041 "memory_domains": [ 00:10:10.041 { 00:10:10.041 "dma_device_id": "system", 00:10:10.041 "dma_device_type": 1 00:10:10.041 }, 00:10:10.041 { 00:10:10.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.041 "dma_device_type": 2 00:10:10.041 } 00:10:10.041 ], 00:10:10.041 "driver_specific": {} 00:10:10.041 } 00:10:10.041 ] 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:10.041 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.042 "name": "Existed_Raid", 00:10:10.042 "uuid": "cd1b8c35-e3df-4bd6-b9dc-a39ebb45a260", 00:10:10.042 "strip_size_kb": 64, 00:10:10.042 "state": "configuring", 00:10:10.042 "raid_level": "concat", 00:10:10.042 "superblock": true, 00:10:10.042 "num_base_bdevs": 4, 00:10:10.042 "num_base_bdevs_discovered": 2, 00:10:10.042 
"num_base_bdevs_operational": 4, 00:10:10.042 "base_bdevs_list": [ 00:10:10.042 { 00:10:10.042 "name": "BaseBdev1", 00:10:10.042 "uuid": "c686daf3-5911-414b-82ee-4cc734b16faf", 00:10:10.042 "is_configured": true, 00:10:10.042 "data_offset": 2048, 00:10:10.042 "data_size": 63488 00:10:10.042 }, 00:10:10.042 { 00:10:10.042 "name": "BaseBdev2", 00:10:10.042 "uuid": "1a574be1-7219-4567-b7bc-cbd87d945160", 00:10:10.042 "is_configured": true, 00:10:10.042 "data_offset": 2048, 00:10:10.042 "data_size": 63488 00:10:10.042 }, 00:10:10.042 { 00:10:10.042 "name": "BaseBdev3", 00:10:10.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.042 "is_configured": false, 00:10:10.042 "data_offset": 0, 00:10:10.042 "data_size": 0 00:10:10.042 }, 00:10:10.042 { 00:10:10.042 "name": "BaseBdev4", 00:10:10.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.042 "is_configured": false, 00:10:10.042 "data_offset": 0, 00:10:10.042 "data_size": 0 00:10:10.042 } 00:10:10.042 ] 00:10:10.042 }' 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.042 23:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.300 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.301 [2024-09-30 23:27:50.126612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.301 BaseBdev3 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.301 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.560 [ 00:10:10.560 { 00:10:10.560 "name": "BaseBdev3", 00:10:10.560 "aliases": [ 00:10:10.560 "39973c03-d211-485e-a07c-80c96383238b" 00:10:10.560 ], 00:10:10.560 "product_name": "Malloc disk", 00:10:10.560 "block_size": 512, 00:10:10.560 "num_blocks": 65536, 00:10:10.560 "uuid": "39973c03-d211-485e-a07c-80c96383238b", 00:10:10.560 "assigned_rate_limits": { 00:10:10.560 "rw_ios_per_sec": 0, 00:10:10.560 "rw_mbytes_per_sec": 0, 00:10:10.560 "r_mbytes_per_sec": 0, 00:10:10.560 "w_mbytes_per_sec": 0 00:10:10.560 }, 00:10:10.560 "claimed": true, 00:10:10.560 "claim_type": "exclusive_write", 00:10:10.560 "zoned": false, 00:10:10.560 "supported_io_types": { 
00:10:10.560 "read": true, 00:10:10.560 "write": true, 00:10:10.560 "unmap": true, 00:10:10.560 "flush": true, 00:10:10.560 "reset": true, 00:10:10.560 "nvme_admin": false, 00:10:10.560 "nvme_io": false, 00:10:10.560 "nvme_io_md": false, 00:10:10.560 "write_zeroes": true, 00:10:10.560 "zcopy": true, 00:10:10.560 "get_zone_info": false, 00:10:10.560 "zone_management": false, 00:10:10.560 "zone_append": false, 00:10:10.560 "compare": false, 00:10:10.560 "compare_and_write": false, 00:10:10.560 "abort": true, 00:10:10.560 "seek_hole": false, 00:10:10.560 "seek_data": false, 00:10:10.560 "copy": true, 00:10:10.560 "nvme_iov_md": false 00:10:10.560 }, 00:10:10.560 "memory_domains": [ 00:10:10.560 { 00:10:10.560 "dma_device_id": "system", 00:10:10.560 "dma_device_type": 1 00:10:10.560 }, 00:10:10.560 { 00:10:10.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.560 "dma_device_type": 2 00:10:10.560 } 00:10:10.560 ], 00:10:10.560 "driver_specific": {} 00:10:10.560 } 00:10:10.560 ] 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.561 "name": "Existed_Raid", 00:10:10.561 "uuid": "cd1b8c35-e3df-4bd6-b9dc-a39ebb45a260", 00:10:10.561 "strip_size_kb": 64, 00:10:10.561 "state": "configuring", 00:10:10.561 "raid_level": "concat", 00:10:10.561 "superblock": true, 00:10:10.561 "num_base_bdevs": 4, 00:10:10.561 "num_base_bdevs_discovered": 3, 00:10:10.561 "num_base_bdevs_operational": 4, 00:10:10.561 "base_bdevs_list": [ 00:10:10.561 { 00:10:10.561 "name": "BaseBdev1", 00:10:10.561 "uuid": "c686daf3-5911-414b-82ee-4cc734b16faf", 00:10:10.561 "is_configured": true, 00:10:10.561 "data_offset": 2048, 00:10:10.561 "data_size": 63488 00:10:10.561 }, 00:10:10.561 { 00:10:10.561 "name": "BaseBdev2", 00:10:10.561 
"uuid": "1a574be1-7219-4567-b7bc-cbd87d945160", 00:10:10.561 "is_configured": true, 00:10:10.561 "data_offset": 2048, 00:10:10.561 "data_size": 63488 00:10:10.561 }, 00:10:10.561 { 00:10:10.561 "name": "BaseBdev3", 00:10:10.561 "uuid": "39973c03-d211-485e-a07c-80c96383238b", 00:10:10.561 "is_configured": true, 00:10:10.561 "data_offset": 2048, 00:10:10.561 "data_size": 63488 00:10:10.561 }, 00:10:10.561 { 00:10:10.561 "name": "BaseBdev4", 00:10:10.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.561 "is_configured": false, 00:10:10.561 "data_offset": 0, 00:10:10.561 "data_size": 0 00:10:10.561 } 00:10:10.561 ] 00:10:10.561 }' 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.561 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.821 [2024-09-30 23:27:50.620830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:10.821 [2024-09-30 23:27:50.621065] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:10.821 [2024-09-30 23:27:50.621088] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.821 BaseBdev4 00:10:10.821 [2024-09-30 23:27:50.621378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:10.821 [2024-09-30 23:27:50.621518] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:10.821 [2024-09-30 23:27:50.621530] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:10.821 [2024-09-30 23:27:50.621632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.821 [ 00:10:10.821 { 00:10:10.821 "name": "BaseBdev4", 00:10:10.821 "aliases": [ 00:10:10.821 "2968a60b-80bd-48d6-af48-252cba2f62c5" 00:10:10.821 ], 00:10:10.821 "product_name": "Malloc disk", 00:10:10.821 "block_size": 512, 00:10:10.821 
"num_blocks": 65536, 00:10:10.821 "uuid": "2968a60b-80bd-48d6-af48-252cba2f62c5", 00:10:10.821 "assigned_rate_limits": { 00:10:10.821 "rw_ios_per_sec": 0, 00:10:10.821 "rw_mbytes_per_sec": 0, 00:10:10.821 "r_mbytes_per_sec": 0, 00:10:10.821 "w_mbytes_per_sec": 0 00:10:10.821 }, 00:10:10.821 "claimed": true, 00:10:10.821 "claim_type": "exclusive_write", 00:10:10.821 "zoned": false, 00:10:10.821 "supported_io_types": { 00:10:10.821 "read": true, 00:10:10.821 "write": true, 00:10:10.821 "unmap": true, 00:10:10.821 "flush": true, 00:10:10.821 "reset": true, 00:10:10.821 "nvme_admin": false, 00:10:10.821 "nvme_io": false, 00:10:10.821 "nvme_io_md": false, 00:10:10.821 "write_zeroes": true, 00:10:10.821 "zcopy": true, 00:10:10.821 "get_zone_info": false, 00:10:10.821 "zone_management": false, 00:10:10.821 "zone_append": false, 00:10:10.821 "compare": false, 00:10:10.821 "compare_and_write": false, 00:10:10.821 "abort": true, 00:10:10.821 "seek_hole": false, 00:10:10.821 "seek_data": false, 00:10:10.821 "copy": true, 00:10:10.821 "nvme_iov_md": false 00:10:10.821 }, 00:10:10.821 "memory_domains": [ 00:10:10.821 { 00:10:10.821 "dma_device_id": "system", 00:10:10.821 "dma_device_type": 1 00:10:10.821 }, 00:10:10.821 { 00:10:10.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.821 "dma_device_type": 2 00:10:10.821 } 00:10:10.821 ], 00:10:10.821 "driver_specific": {} 00:10:10.821 } 00:10:10.821 ] 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.821 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.822 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.822 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.822 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.822 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.822 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.822 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.822 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.822 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.080 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.080 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.080 "name": "Existed_Raid", 00:10:11.080 "uuid": "cd1b8c35-e3df-4bd6-b9dc-a39ebb45a260", 00:10:11.080 "strip_size_kb": 64, 00:10:11.080 "state": "online", 00:10:11.080 "raid_level": "concat", 00:10:11.080 "superblock": true, 00:10:11.080 "num_base_bdevs": 4, 
00:10:11.080 "num_base_bdevs_discovered": 4, 00:10:11.080 "num_base_bdevs_operational": 4, 00:10:11.080 "base_bdevs_list": [ 00:10:11.080 { 00:10:11.080 "name": "BaseBdev1", 00:10:11.080 "uuid": "c686daf3-5911-414b-82ee-4cc734b16faf", 00:10:11.080 "is_configured": true, 00:10:11.081 "data_offset": 2048, 00:10:11.081 "data_size": 63488 00:10:11.081 }, 00:10:11.081 { 00:10:11.081 "name": "BaseBdev2", 00:10:11.081 "uuid": "1a574be1-7219-4567-b7bc-cbd87d945160", 00:10:11.081 "is_configured": true, 00:10:11.081 "data_offset": 2048, 00:10:11.081 "data_size": 63488 00:10:11.081 }, 00:10:11.081 { 00:10:11.081 "name": "BaseBdev3", 00:10:11.081 "uuid": "39973c03-d211-485e-a07c-80c96383238b", 00:10:11.081 "is_configured": true, 00:10:11.081 "data_offset": 2048, 00:10:11.081 "data_size": 63488 00:10:11.081 }, 00:10:11.081 { 00:10:11.081 "name": "BaseBdev4", 00:10:11.081 "uuid": "2968a60b-80bd-48d6-af48-252cba2f62c5", 00:10:11.081 "is_configured": true, 00:10:11.081 "data_offset": 2048, 00:10:11.081 "data_size": 63488 00:10:11.081 } 00:10:11.081 ] 00:10:11.081 }' 00:10:11.081 23:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.081 23:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.340 
23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.340 [2024-09-30 23:27:51.076380] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.340 "name": "Existed_Raid", 00:10:11.340 "aliases": [ 00:10:11.340 "cd1b8c35-e3df-4bd6-b9dc-a39ebb45a260" 00:10:11.340 ], 00:10:11.340 "product_name": "Raid Volume", 00:10:11.340 "block_size": 512, 00:10:11.340 "num_blocks": 253952, 00:10:11.340 "uuid": "cd1b8c35-e3df-4bd6-b9dc-a39ebb45a260", 00:10:11.340 "assigned_rate_limits": { 00:10:11.340 "rw_ios_per_sec": 0, 00:10:11.340 "rw_mbytes_per_sec": 0, 00:10:11.340 "r_mbytes_per_sec": 0, 00:10:11.340 "w_mbytes_per_sec": 0 00:10:11.340 }, 00:10:11.340 "claimed": false, 00:10:11.340 "zoned": false, 00:10:11.340 "supported_io_types": { 00:10:11.340 "read": true, 00:10:11.340 "write": true, 00:10:11.340 "unmap": true, 00:10:11.340 "flush": true, 00:10:11.340 "reset": true, 00:10:11.340 "nvme_admin": false, 00:10:11.340 "nvme_io": false, 00:10:11.340 "nvme_io_md": false, 00:10:11.340 "write_zeroes": true, 00:10:11.340 "zcopy": false, 00:10:11.340 "get_zone_info": false, 00:10:11.340 "zone_management": false, 00:10:11.340 "zone_append": false, 00:10:11.340 "compare": false, 00:10:11.340 "compare_and_write": false, 00:10:11.340 "abort": false, 00:10:11.340 "seek_hole": false, 00:10:11.340 "seek_data": false, 00:10:11.340 "copy": false, 00:10:11.340 
"nvme_iov_md": false 00:10:11.340 }, 00:10:11.340 "memory_domains": [ 00:10:11.340 { 00:10:11.340 "dma_device_id": "system", 00:10:11.340 "dma_device_type": 1 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.340 "dma_device_type": 2 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "dma_device_id": "system", 00:10:11.340 "dma_device_type": 1 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.340 "dma_device_type": 2 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "dma_device_id": "system", 00:10:11.340 "dma_device_type": 1 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.340 "dma_device_type": 2 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "dma_device_id": "system", 00:10:11.340 "dma_device_type": 1 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.340 "dma_device_type": 2 00:10:11.340 } 00:10:11.340 ], 00:10:11.340 "driver_specific": { 00:10:11.340 "raid": { 00:10:11.340 "uuid": "cd1b8c35-e3df-4bd6-b9dc-a39ebb45a260", 00:10:11.340 "strip_size_kb": 64, 00:10:11.340 "state": "online", 00:10:11.340 "raid_level": "concat", 00:10:11.340 "superblock": true, 00:10:11.340 "num_base_bdevs": 4, 00:10:11.340 "num_base_bdevs_discovered": 4, 00:10:11.340 "num_base_bdevs_operational": 4, 00:10:11.340 "base_bdevs_list": [ 00:10:11.340 { 00:10:11.340 "name": "BaseBdev1", 00:10:11.340 "uuid": "c686daf3-5911-414b-82ee-4cc734b16faf", 00:10:11.340 "is_configured": true, 00:10:11.340 "data_offset": 2048, 00:10:11.340 "data_size": 63488 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "name": "BaseBdev2", 00:10:11.340 "uuid": "1a574be1-7219-4567-b7bc-cbd87d945160", 00:10:11.340 "is_configured": true, 00:10:11.340 "data_offset": 2048, 00:10:11.340 "data_size": 63488 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "name": "BaseBdev3", 00:10:11.340 "uuid": "39973c03-d211-485e-a07c-80c96383238b", 00:10:11.340 "is_configured": true, 
00:10:11.340 "data_offset": 2048, 00:10:11.340 "data_size": 63488 00:10:11.340 }, 00:10:11.340 { 00:10:11.340 "name": "BaseBdev4", 00:10:11.340 "uuid": "2968a60b-80bd-48d6-af48-252cba2f62c5", 00:10:11.340 "is_configured": true, 00:10:11.340 "data_offset": 2048, 00:10:11.340 "data_size": 63488 00:10:11.340 } 00:10:11.340 ] 00:10:11.340 } 00:10:11.340 } 00:10:11.340 }' 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.340 BaseBdev2 00:10:11.340 BaseBdev3 00:10:11.340 BaseBdev4' 00:10:11.340 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.600 23:27:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.600 [2024-09-30 23:27:51.371604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.600 [2024-09-30 23:27:51.371637] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.600 [2024-09-30 23:27:51.371682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.600 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.601 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.601 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.601 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.601 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:11.601 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.601 "name": "Existed_Raid", 00:10:11.601 "uuid": "cd1b8c35-e3df-4bd6-b9dc-a39ebb45a260", 00:10:11.601 "strip_size_kb": 64, 00:10:11.601 "state": "offline", 00:10:11.601 "raid_level": "concat", 00:10:11.601 "superblock": true, 00:10:11.601 "num_base_bdevs": 4, 00:10:11.601 "num_base_bdevs_discovered": 3, 00:10:11.601 "num_base_bdevs_operational": 3, 00:10:11.601 "base_bdevs_list": [ 00:10:11.601 { 00:10:11.601 "name": null, 00:10:11.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.601 "is_configured": false, 00:10:11.601 "data_offset": 0, 00:10:11.601 "data_size": 63488 00:10:11.601 }, 00:10:11.601 { 00:10:11.601 "name": "BaseBdev2", 00:10:11.601 "uuid": "1a574be1-7219-4567-b7bc-cbd87d945160", 00:10:11.601 "is_configured": true, 00:10:11.601 "data_offset": 2048, 00:10:11.601 "data_size": 63488 00:10:11.601 }, 00:10:11.601 { 00:10:11.601 "name": "BaseBdev3", 00:10:11.601 "uuid": "39973c03-d211-485e-a07c-80c96383238b", 00:10:11.601 "is_configured": true, 00:10:11.601 "data_offset": 2048, 00:10:11.601 "data_size": 63488 00:10:11.601 }, 00:10:11.601 { 00:10:11.601 "name": "BaseBdev4", 00:10:11.601 "uuid": "2968a60b-80bd-48d6-af48-252cba2f62c5", 00:10:11.601 "is_configured": true, 00:10:11.601 "data_offset": 2048, 00:10:11.601 "data_size": 63488 00:10:11.601 } 00:10:11.601 ] 00:10:11.601 }' 00:10:11.601 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.601 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.169 
23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.169 [2024-09-30 23:27:51.906026] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.169 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.170 [2024-09-30 23:27:51.961202] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.170 23:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:12.429 23:27:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 [2024-09-30 23:27:52.028014] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:12.429 [2024-09-30 23:27:52.028061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 BaseBdev2 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 [ 00:10:12.429 { 00:10:12.429 "name": "BaseBdev2", 00:10:12.429 "aliases": [ 00:10:12.429 
"6d8d04a3-84a3-4dff-b050-ce222248d942" 00:10:12.429 ], 00:10:12.429 "product_name": "Malloc disk", 00:10:12.429 "block_size": 512, 00:10:12.429 "num_blocks": 65536, 00:10:12.429 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:12.429 "assigned_rate_limits": { 00:10:12.429 "rw_ios_per_sec": 0, 00:10:12.429 "rw_mbytes_per_sec": 0, 00:10:12.429 "r_mbytes_per_sec": 0, 00:10:12.429 "w_mbytes_per_sec": 0 00:10:12.429 }, 00:10:12.429 "claimed": false, 00:10:12.429 "zoned": false, 00:10:12.429 "supported_io_types": { 00:10:12.429 "read": true, 00:10:12.429 "write": true, 00:10:12.429 "unmap": true, 00:10:12.429 "flush": true, 00:10:12.429 "reset": true, 00:10:12.429 "nvme_admin": false, 00:10:12.429 "nvme_io": false, 00:10:12.429 "nvme_io_md": false, 00:10:12.429 "write_zeroes": true, 00:10:12.429 "zcopy": true, 00:10:12.429 "get_zone_info": false, 00:10:12.429 "zone_management": false, 00:10:12.429 "zone_append": false, 00:10:12.429 "compare": false, 00:10:12.429 "compare_and_write": false, 00:10:12.429 "abort": true, 00:10:12.429 "seek_hole": false, 00:10:12.429 "seek_data": false, 00:10:12.429 "copy": true, 00:10:12.429 "nvme_iov_md": false 00:10:12.429 }, 00:10:12.429 "memory_domains": [ 00:10:12.429 { 00:10:12.429 "dma_device_id": "system", 00:10:12.429 "dma_device_type": 1 00:10:12.429 }, 00:10:12.429 { 00:10:12.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.429 "dma_device_type": 2 00:10:12.429 } 00:10:12.429 ], 00:10:12.429 "driver_specific": {} 00:10:12.429 } 00:10:12.429 ] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.429 23:27:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 BaseBdev3 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.429 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.430 [ 00:10:12.430 { 
00:10:12.430 "name": "BaseBdev3", 00:10:12.430 "aliases": [ 00:10:12.430 "a4433538-c9f3-437a-ad02-51bff6124345" 00:10:12.430 ], 00:10:12.430 "product_name": "Malloc disk", 00:10:12.430 "block_size": 512, 00:10:12.430 "num_blocks": 65536, 00:10:12.430 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:12.430 "assigned_rate_limits": { 00:10:12.430 "rw_ios_per_sec": 0, 00:10:12.430 "rw_mbytes_per_sec": 0, 00:10:12.430 "r_mbytes_per_sec": 0, 00:10:12.430 "w_mbytes_per_sec": 0 00:10:12.430 }, 00:10:12.430 "claimed": false, 00:10:12.430 "zoned": false, 00:10:12.430 "supported_io_types": { 00:10:12.430 "read": true, 00:10:12.430 "write": true, 00:10:12.430 "unmap": true, 00:10:12.430 "flush": true, 00:10:12.430 "reset": true, 00:10:12.430 "nvme_admin": false, 00:10:12.430 "nvme_io": false, 00:10:12.430 "nvme_io_md": false, 00:10:12.430 "write_zeroes": true, 00:10:12.430 "zcopy": true, 00:10:12.430 "get_zone_info": false, 00:10:12.430 "zone_management": false, 00:10:12.430 "zone_append": false, 00:10:12.430 "compare": false, 00:10:12.430 "compare_and_write": false, 00:10:12.430 "abort": true, 00:10:12.430 "seek_hole": false, 00:10:12.430 "seek_data": false, 00:10:12.430 "copy": true, 00:10:12.430 "nvme_iov_md": false 00:10:12.430 }, 00:10:12.430 "memory_domains": [ 00:10:12.430 { 00:10:12.430 "dma_device_id": "system", 00:10:12.430 "dma_device_type": 1 00:10:12.430 }, 00:10:12.430 { 00:10:12.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.430 "dma_device_type": 2 00:10:12.430 } 00:10:12.430 ], 00:10:12.430 "driver_specific": {} 00:10:12.430 } 00:10:12.430 ] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.430 BaseBdev4 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:12.430 [ 00:10:12.430 { 00:10:12.430 "name": "BaseBdev4", 00:10:12.430 "aliases": [ 00:10:12.430 "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd" 00:10:12.430 ], 00:10:12.430 "product_name": "Malloc disk", 00:10:12.430 "block_size": 512, 00:10:12.430 "num_blocks": 65536, 00:10:12.430 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:12.430 "assigned_rate_limits": { 00:10:12.430 "rw_ios_per_sec": 0, 00:10:12.430 "rw_mbytes_per_sec": 0, 00:10:12.430 "r_mbytes_per_sec": 0, 00:10:12.430 "w_mbytes_per_sec": 0 00:10:12.430 }, 00:10:12.430 "claimed": false, 00:10:12.430 "zoned": false, 00:10:12.430 "supported_io_types": { 00:10:12.430 "read": true, 00:10:12.430 "write": true, 00:10:12.430 "unmap": true, 00:10:12.430 "flush": true, 00:10:12.430 "reset": true, 00:10:12.430 "nvme_admin": false, 00:10:12.430 "nvme_io": false, 00:10:12.430 "nvme_io_md": false, 00:10:12.430 "write_zeroes": true, 00:10:12.430 "zcopy": true, 00:10:12.430 "get_zone_info": false, 00:10:12.430 "zone_management": false, 00:10:12.430 "zone_append": false, 00:10:12.430 "compare": false, 00:10:12.430 "compare_and_write": false, 00:10:12.430 "abort": true, 00:10:12.430 "seek_hole": false, 00:10:12.430 "seek_data": false, 00:10:12.430 "copy": true, 00:10:12.430 "nvme_iov_md": false 00:10:12.430 }, 00:10:12.430 "memory_domains": [ 00:10:12.430 { 00:10:12.430 "dma_device_id": "system", 00:10:12.430 "dma_device_type": 1 00:10:12.430 }, 00:10:12.430 { 00:10:12.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.430 "dma_device_type": 2 00:10:12.430 } 00:10:12.430 ], 00:10:12.430 "driver_specific": {} 00:10:12.430 } 00:10:12.430 ] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.430 23:27:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.430 [2024-09-30 23:27:52.246721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.430 [2024-09-30 23:27:52.246765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.430 [2024-09-30 23:27:52.246801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.430 [2024-09-30 23:27:52.248609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.430 [2024-09-30 23:27:52.248679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.430 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.689 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.689 "name": "Existed_Raid", 00:10:12.689 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:12.689 "strip_size_kb": 64, 00:10:12.689 "state": "configuring", 00:10:12.689 "raid_level": "concat", 00:10:12.689 "superblock": true, 00:10:12.689 "num_base_bdevs": 4, 00:10:12.689 "num_base_bdevs_discovered": 3, 00:10:12.689 "num_base_bdevs_operational": 4, 00:10:12.689 "base_bdevs_list": [ 00:10:12.689 { 00:10:12.689 "name": "BaseBdev1", 00:10:12.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.689 "is_configured": false, 00:10:12.689 "data_offset": 0, 00:10:12.689 "data_size": 0 00:10:12.689 }, 00:10:12.689 { 00:10:12.689 "name": "BaseBdev2", 00:10:12.689 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:12.689 "is_configured": true, 00:10:12.689 "data_offset": 2048, 00:10:12.689 "data_size": 63488 
00:10:12.689 }, 00:10:12.689 { 00:10:12.689 "name": "BaseBdev3", 00:10:12.689 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:12.689 "is_configured": true, 00:10:12.689 "data_offset": 2048, 00:10:12.689 "data_size": 63488 00:10:12.689 }, 00:10:12.689 { 00:10:12.689 "name": "BaseBdev4", 00:10:12.689 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:12.689 "is_configured": true, 00:10:12.689 "data_offset": 2048, 00:10:12.689 "data_size": 63488 00:10:12.689 } 00:10:12.689 ] 00:10:12.689 }' 00:10:12.689 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.689 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.948 [2024-09-30 23:27:52.685947] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.948 "name": "Existed_Raid", 00:10:12.948 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:12.948 "strip_size_kb": 64, 00:10:12.948 "state": "configuring", 00:10:12.948 "raid_level": "concat", 00:10:12.948 "superblock": true, 00:10:12.948 "num_base_bdevs": 4, 00:10:12.948 "num_base_bdevs_discovered": 2, 00:10:12.948 "num_base_bdevs_operational": 4, 00:10:12.948 "base_bdevs_list": [ 00:10:12.948 { 00:10:12.948 "name": "BaseBdev1", 00:10:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.948 "is_configured": false, 00:10:12.948 "data_offset": 0, 00:10:12.948 "data_size": 0 00:10:12.948 }, 00:10:12.948 { 00:10:12.948 "name": null, 00:10:12.948 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:12.948 "is_configured": false, 00:10:12.948 "data_offset": 0, 00:10:12.948 "data_size": 63488 
00:10:12.948 }, 00:10:12.948 { 00:10:12.948 "name": "BaseBdev3", 00:10:12.948 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:12.948 "is_configured": true, 00:10:12.948 "data_offset": 2048, 00:10:12.948 "data_size": 63488 00:10:12.948 }, 00:10:12.948 { 00:10:12.948 "name": "BaseBdev4", 00:10:12.948 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:12.948 "is_configured": true, 00:10:12.948 "data_offset": 2048, 00:10:12.948 "data_size": 63488 00:10:12.948 } 00:10:12.948 ] 00:10:12.948 }' 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.948 23:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.516 [2024-09-30 23:27:53.152463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.516 BaseBdev1 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.516 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.517 [ 00:10:13.517 { 00:10:13.517 "name": "BaseBdev1", 00:10:13.517 "aliases": [ 00:10:13.517 "2bcf7dc7-5792-45f7-bd36-edea717aa75e" 00:10:13.517 ], 00:10:13.517 "product_name": "Malloc disk", 00:10:13.517 "block_size": 512, 00:10:13.517 "num_blocks": 65536, 00:10:13.517 "uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:13.517 "assigned_rate_limits": { 00:10:13.517 "rw_ios_per_sec": 0, 00:10:13.517 "rw_mbytes_per_sec": 0, 
00:10:13.517 "r_mbytes_per_sec": 0, 00:10:13.517 "w_mbytes_per_sec": 0 00:10:13.517 }, 00:10:13.517 "claimed": true, 00:10:13.517 "claim_type": "exclusive_write", 00:10:13.517 "zoned": false, 00:10:13.517 "supported_io_types": { 00:10:13.517 "read": true, 00:10:13.517 "write": true, 00:10:13.517 "unmap": true, 00:10:13.517 "flush": true, 00:10:13.517 "reset": true, 00:10:13.517 "nvme_admin": false, 00:10:13.517 "nvme_io": false, 00:10:13.517 "nvme_io_md": false, 00:10:13.517 "write_zeroes": true, 00:10:13.517 "zcopy": true, 00:10:13.517 "get_zone_info": false, 00:10:13.517 "zone_management": false, 00:10:13.517 "zone_append": false, 00:10:13.517 "compare": false, 00:10:13.517 "compare_and_write": false, 00:10:13.517 "abort": true, 00:10:13.517 "seek_hole": false, 00:10:13.517 "seek_data": false, 00:10:13.517 "copy": true, 00:10:13.517 "nvme_iov_md": false 00:10:13.517 }, 00:10:13.517 "memory_domains": [ 00:10:13.517 { 00:10:13.517 "dma_device_id": "system", 00:10:13.517 "dma_device_type": 1 00:10:13.517 }, 00:10:13.517 { 00:10:13.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.517 "dma_device_type": 2 00:10:13.517 } 00:10:13.517 ], 00:10:13.517 "driver_specific": {} 00:10:13.517 } 00:10:13.517 ] 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.517 23:27:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.517 "name": "Existed_Raid", 00:10:13.517 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:13.517 "strip_size_kb": 64, 00:10:13.517 "state": "configuring", 00:10:13.517 "raid_level": "concat", 00:10:13.517 "superblock": true, 00:10:13.517 "num_base_bdevs": 4, 00:10:13.517 "num_base_bdevs_discovered": 3, 00:10:13.517 "num_base_bdevs_operational": 4, 00:10:13.517 "base_bdevs_list": [ 00:10:13.517 { 00:10:13.517 "name": "BaseBdev1", 00:10:13.517 "uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:13.517 "is_configured": true, 00:10:13.517 "data_offset": 2048, 00:10:13.517 "data_size": 63488 00:10:13.517 }, 00:10:13.517 { 
00:10:13.517 "name": null, 00:10:13.517 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:13.517 "is_configured": false, 00:10:13.517 "data_offset": 0, 00:10:13.517 "data_size": 63488 00:10:13.517 }, 00:10:13.517 { 00:10:13.517 "name": "BaseBdev3", 00:10:13.517 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:13.517 "is_configured": true, 00:10:13.517 "data_offset": 2048, 00:10:13.517 "data_size": 63488 00:10:13.517 }, 00:10:13.517 { 00:10:13.517 "name": "BaseBdev4", 00:10:13.517 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:13.517 "is_configured": true, 00:10:13.517 "data_offset": 2048, 00:10:13.517 "data_size": 63488 00:10:13.517 } 00:10:13.517 ] 00:10:13.517 }' 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.517 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.084 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.084 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.084 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.084 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.084 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.084 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.085 [2024-09-30 23:27:53.691591] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.085 23:27:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.085 "name": "Existed_Raid", 00:10:14.085 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:14.085 "strip_size_kb": 64, 00:10:14.085 "state": "configuring", 00:10:14.085 "raid_level": "concat", 00:10:14.085 "superblock": true, 00:10:14.085 "num_base_bdevs": 4, 00:10:14.085 "num_base_bdevs_discovered": 2, 00:10:14.085 "num_base_bdevs_operational": 4, 00:10:14.085 "base_bdevs_list": [ 00:10:14.085 { 00:10:14.085 "name": "BaseBdev1", 00:10:14.085 "uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:14.085 "is_configured": true, 00:10:14.085 "data_offset": 2048, 00:10:14.085 "data_size": 63488 00:10:14.085 }, 00:10:14.085 { 00:10:14.085 "name": null, 00:10:14.085 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:14.085 "is_configured": false, 00:10:14.085 "data_offset": 0, 00:10:14.085 "data_size": 63488 00:10:14.085 }, 00:10:14.085 { 00:10:14.085 "name": null, 00:10:14.085 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:14.085 "is_configured": false, 00:10:14.085 "data_offset": 0, 00:10:14.085 "data_size": 63488 00:10:14.085 }, 00:10:14.085 { 00:10:14.085 "name": "BaseBdev4", 00:10:14.085 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:14.085 "is_configured": true, 00:10:14.085 "data_offset": 2048, 00:10:14.085 "data_size": 63488 00:10:14.085 } 00:10:14.085 ] 00:10:14.085 }' 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.085 23:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.344 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.344 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.344 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.344 23:27:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.344 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.344 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:14.344 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:14.344 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.344 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.603 [2024-09-30 23:27:54.198827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.603 "name": "Existed_Raid", 00:10:14.603 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:14.603 "strip_size_kb": 64, 00:10:14.603 "state": "configuring", 00:10:14.603 "raid_level": "concat", 00:10:14.603 "superblock": true, 00:10:14.603 "num_base_bdevs": 4, 00:10:14.603 "num_base_bdevs_discovered": 3, 00:10:14.603 "num_base_bdevs_operational": 4, 00:10:14.603 "base_bdevs_list": [ 00:10:14.603 { 00:10:14.603 "name": "BaseBdev1", 00:10:14.603 "uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:14.603 "is_configured": true, 00:10:14.603 "data_offset": 2048, 00:10:14.603 "data_size": 63488 00:10:14.603 }, 00:10:14.603 { 00:10:14.603 "name": null, 00:10:14.603 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:14.603 "is_configured": false, 00:10:14.603 "data_offset": 0, 00:10:14.603 "data_size": 63488 00:10:14.603 }, 00:10:14.603 { 00:10:14.603 "name": "BaseBdev3", 00:10:14.603 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:14.603 "is_configured": true, 00:10:14.603 "data_offset": 2048, 00:10:14.603 "data_size": 63488 00:10:14.603 }, 00:10:14.603 { 00:10:14.603 "name": "BaseBdev4", 00:10:14.603 "uuid": 
"406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:14.603 "is_configured": true, 00:10:14.603 "data_offset": 2048, 00:10:14.603 "data_size": 63488 00:10:14.603 } 00:10:14.603 ] 00:10:14.603 }' 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.603 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.862 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.862 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.862 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.862 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.863 [2024-09-30 23:27:54.662029] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.863 "name": "Existed_Raid", 00:10:14.863 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:14.863 "strip_size_kb": 64, 00:10:14.863 "state": "configuring", 00:10:14.863 "raid_level": "concat", 00:10:14.863 "superblock": true, 00:10:14.863 "num_base_bdevs": 4, 00:10:14.863 "num_base_bdevs_discovered": 2, 00:10:14.863 "num_base_bdevs_operational": 4, 00:10:14.863 "base_bdevs_list": [ 00:10:14.863 { 00:10:14.863 "name": null, 00:10:14.863 
"uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:14.863 "is_configured": false, 00:10:14.863 "data_offset": 0, 00:10:14.863 "data_size": 63488 00:10:14.863 }, 00:10:14.863 { 00:10:14.863 "name": null, 00:10:14.863 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:14.863 "is_configured": false, 00:10:14.863 "data_offset": 0, 00:10:14.863 "data_size": 63488 00:10:14.863 }, 00:10:14.863 { 00:10:14.863 "name": "BaseBdev3", 00:10:14.863 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:14.863 "is_configured": true, 00:10:14.863 "data_offset": 2048, 00:10:14.863 "data_size": 63488 00:10:14.863 }, 00:10:14.863 { 00:10:14.863 "name": "BaseBdev4", 00:10:14.863 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:14.863 "is_configured": true, 00:10:14.863 "data_offset": 2048, 00:10:14.863 "data_size": 63488 00:10:14.863 } 00:10:14.863 ] 00:10:14.863 }' 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.863 23:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.430 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.431 [2024-09-30 23:27:55.099923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.431 "name": "Existed_Raid", 00:10:15.431 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:15.431 "strip_size_kb": 64, 00:10:15.431 "state": "configuring", 00:10:15.431 "raid_level": "concat", 00:10:15.431 "superblock": true, 00:10:15.431 "num_base_bdevs": 4, 00:10:15.431 "num_base_bdevs_discovered": 3, 00:10:15.431 "num_base_bdevs_operational": 4, 00:10:15.431 "base_bdevs_list": [ 00:10:15.431 { 00:10:15.431 "name": null, 00:10:15.431 "uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:15.431 "is_configured": false, 00:10:15.431 "data_offset": 0, 00:10:15.431 "data_size": 63488 00:10:15.431 }, 00:10:15.431 { 00:10:15.431 "name": "BaseBdev2", 00:10:15.431 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:15.431 "is_configured": true, 00:10:15.431 "data_offset": 2048, 00:10:15.431 "data_size": 63488 00:10:15.431 }, 00:10:15.431 { 00:10:15.431 "name": "BaseBdev3", 00:10:15.431 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:15.431 "is_configured": true, 00:10:15.431 "data_offset": 2048, 00:10:15.431 "data_size": 63488 00:10:15.431 }, 00:10:15.431 { 00:10:15.431 "name": "BaseBdev4", 00:10:15.431 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:15.431 "is_configured": true, 00:10:15.431 "data_offset": 2048, 00:10:15.431 "data_size": 63488 00:10:15.431 } 00:10:15.431 ] 00:10:15.431 }' 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.431 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.000 23:27:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2bcf7dc7-5792-45f7-bd36-edea717aa75e 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.000 [2024-09-30 23:27:55.654090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:16.000 [2024-09-30 23:27:55.654295] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:16.000 [2024-09-30 23:27:55.654308] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:16.000 [2024-09-30 23:27:55.654561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:16.000 [2024-09-30 23:27:55.654678] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:16.000 [2024-09-30 23:27:55.654691] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:16.000 [2024-09-30 23:27:55.654785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.000 NewBaseBdev 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.000 23:27:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.000 [ 00:10:16.000 { 00:10:16.000 "name": "NewBaseBdev", 00:10:16.000 "aliases": [ 00:10:16.000 "2bcf7dc7-5792-45f7-bd36-edea717aa75e" 00:10:16.000 ], 00:10:16.000 "product_name": "Malloc disk", 00:10:16.000 "block_size": 512, 00:10:16.000 "num_blocks": 65536, 00:10:16.000 "uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:16.000 "assigned_rate_limits": { 00:10:16.000 "rw_ios_per_sec": 0, 00:10:16.000 "rw_mbytes_per_sec": 0, 00:10:16.000 "r_mbytes_per_sec": 0, 00:10:16.000 "w_mbytes_per_sec": 0 00:10:16.000 }, 00:10:16.000 "claimed": true, 00:10:16.000 "claim_type": "exclusive_write", 00:10:16.000 "zoned": false, 00:10:16.000 "supported_io_types": { 00:10:16.000 "read": true, 00:10:16.000 "write": true, 00:10:16.000 "unmap": true, 00:10:16.000 "flush": true, 00:10:16.000 "reset": true, 00:10:16.000 "nvme_admin": false, 00:10:16.000 "nvme_io": false, 00:10:16.000 "nvme_io_md": false, 00:10:16.000 "write_zeroes": true, 00:10:16.000 "zcopy": true, 00:10:16.000 "get_zone_info": false, 00:10:16.000 "zone_management": false, 00:10:16.000 "zone_append": false, 00:10:16.000 "compare": false, 00:10:16.000 "compare_and_write": false, 00:10:16.000 "abort": true, 00:10:16.000 "seek_hole": false, 00:10:16.000 "seek_data": false, 00:10:16.000 "copy": true, 00:10:16.000 "nvme_iov_md": false 00:10:16.000 }, 00:10:16.000 "memory_domains": [ 00:10:16.000 { 00:10:16.000 "dma_device_id": "system", 00:10:16.000 "dma_device_type": 1 00:10:16.000 }, 00:10:16.000 { 00:10:16.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.000 "dma_device_type": 2 00:10:16.000 } 00:10:16.000 ], 00:10:16.000 "driver_specific": {} 00:10:16.000 } 00:10:16.000 ] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:16.000 23:27:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.000 "name": "Existed_Raid", 00:10:16.000 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:16.000 "strip_size_kb": 64, 00:10:16.000 
"state": "online", 00:10:16.000 "raid_level": "concat", 00:10:16.000 "superblock": true, 00:10:16.000 "num_base_bdevs": 4, 00:10:16.000 "num_base_bdevs_discovered": 4, 00:10:16.000 "num_base_bdevs_operational": 4, 00:10:16.000 "base_bdevs_list": [ 00:10:16.000 { 00:10:16.000 "name": "NewBaseBdev", 00:10:16.000 "uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:16.000 "is_configured": true, 00:10:16.000 "data_offset": 2048, 00:10:16.000 "data_size": 63488 00:10:16.000 }, 00:10:16.000 { 00:10:16.000 "name": "BaseBdev2", 00:10:16.000 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:16.000 "is_configured": true, 00:10:16.000 "data_offset": 2048, 00:10:16.000 "data_size": 63488 00:10:16.000 }, 00:10:16.000 { 00:10:16.000 "name": "BaseBdev3", 00:10:16.000 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:16.000 "is_configured": true, 00:10:16.000 "data_offset": 2048, 00:10:16.000 "data_size": 63488 00:10:16.000 }, 00:10:16.000 { 00:10:16.000 "name": "BaseBdev4", 00:10:16.000 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:16.000 "is_configured": true, 00:10:16.000 "data_offset": 2048, 00:10:16.000 "data_size": 63488 00:10:16.000 } 00:10:16.000 ] 00:10:16.000 }' 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.000 23:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.259 
23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.259 [2024-09-30 23:27:56.089761] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.259 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.519 "name": "Existed_Raid", 00:10:16.519 "aliases": [ 00:10:16.519 "f6107192-18bb-47fe-896b-bb6cc00aa020" 00:10:16.519 ], 00:10:16.519 "product_name": "Raid Volume", 00:10:16.519 "block_size": 512, 00:10:16.519 "num_blocks": 253952, 00:10:16.519 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:16.519 "assigned_rate_limits": { 00:10:16.519 "rw_ios_per_sec": 0, 00:10:16.519 "rw_mbytes_per_sec": 0, 00:10:16.519 "r_mbytes_per_sec": 0, 00:10:16.519 "w_mbytes_per_sec": 0 00:10:16.519 }, 00:10:16.519 "claimed": false, 00:10:16.519 "zoned": false, 00:10:16.519 "supported_io_types": { 00:10:16.519 "read": true, 00:10:16.519 "write": true, 00:10:16.519 "unmap": true, 00:10:16.519 "flush": true, 00:10:16.519 "reset": true, 00:10:16.519 "nvme_admin": false, 00:10:16.519 "nvme_io": false, 00:10:16.519 "nvme_io_md": false, 00:10:16.519 "write_zeroes": true, 00:10:16.519 "zcopy": false, 00:10:16.519 "get_zone_info": false, 00:10:16.519 "zone_management": false, 00:10:16.519 "zone_append": false, 00:10:16.519 "compare": false, 00:10:16.519 "compare_and_write": false, 00:10:16.519 "abort": 
false, 00:10:16.519 "seek_hole": false, 00:10:16.519 "seek_data": false, 00:10:16.519 "copy": false, 00:10:16.519 "nvme_iov_md": false 00:10:16.519 }, 00:10:16.519 "memory_domains": [ 00:10:16.519 { 00:10:16.519 "dma_device_id": "system", 00:10:16.519 "dma_device_type": 1 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.519 "dma_device_type": 2 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "dma_device_id": "system", 00:10:16.519 "dma_device_type": 1 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.519 "dma_device_type": 2 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "dma_device_id": "system", 00:10:16.519 "dma_device_type": 1 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.519 "dma_device_type": 2 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "dma_device_id": "system", 00:10:16.519 "dma_device_type": 1 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.519 "dma_device_type": 2 00:10:16.519 } 00:10:16.519 ], 00:10:16.519 "driver_specific": { 00:10:16.519 "raid": { 00:10:16.519 "uuid": "f6107192-18bb-47fe-896b-bb6cc00aa020", 00:10:16.519 "strip_size_kb": 64, 00:10:16.519 "state": "online", 00:10:16.519 "raid_level": "concat", 00:10:16.519 "superblock": true, 00:10:16.519 "num_base_bdevs": 4, 00:10:16.519 "num_base_bdevs_discovered": 4, 00:10:16.519 "num_base_bdevs_operational": 4, 00:10:16.519 "base_bdevs_list": [ 00:10:16.519 { 00:10:16.519 "name": "NewBaseBdev", 00:10:16.519 "uuid": "2bcf7dc7-5792-45f7-bd36-edea717aa75e", 00:10:16.519 "is_configured": true, 00:10:16.519 "data_offset": 2048, 00:10:16.519 "data_size": 63488 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "name": "BaseBdev2", 00:10:16.519 "uuid": "6d8d04a3-84a3-4dff-b050-ce222248d942", 00:10:16.519 "is_configured": true, 00:10:16.519 "data_offset": 2048, 00:10:16.519 "data_size": 63488 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 
"name": "BaseBdev3", 00:10:16.519 "uuid": "a4433538-c9f3-437a-ad02-51bff6124345", 00:10:16.519 "is_configured": true, 00:10:16.519 "data_offset": 2048, 00:10:16.519 "data_size": 63488 00:10:16.519 }, 00:10:16.519 { 00:10:16.519 "name": "BaseBdev4", 00:10:16.519 "uuid": "406d89f0-3c1e-4709-b2c2-a5fa56c1b2dd", 00:10:16.519 "is_configured": true, 00:10:16.519 "data_offset": 2048, 00:10:16.519 "data_size": 63488 00:10:16.519 } 00:10:16.519 ] 00:10:16.519 } 00:10:16.519 } 00:10:16.519 }' 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:16.519 BaseBdev2 00:10:16.519 BaseBdev3 00:10:16.519 BaseBdev4' 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.519 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.520 23:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.520 [2024-09-30 23:27:56.360961] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.520 [2024-09-30 23:27:56.360991] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.520 [2024-09-30 23:27:56.361070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.520 [2024-09-30 23:27:56.361136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.520 [2024-09-30 23:27:56.361157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82862 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82862 ']' 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82862 00:10:16.520 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:16.779 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.779 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82862 00:10:16.779 killing process with pid 82862 00:10:16.779 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.779 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.779 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82862' 00:10:16.779 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82862 00:10:16.779 [2024-09-30 23:27:56.412437] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.779 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82862 00:10:16.779 [2024-09-30 23:27:56.453025] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.038 23:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:17.038 00:10:17.038 real 0m9.271s 00:10:17.038 user 0m15.783s 00:10:17.038 sys 0m1.958s 00:10:17.038 23:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.038 23:27:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.038 ************************************ 00:10:17.038 END TEST raid_state_function_test_sb 00:10:17.038 ************************************ 00:10:17.038 23:27:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:17.038 23:27:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:17.038 23:27:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.038 23:27:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.038 ************************************ 00:10:17.038 START TEST raid_superblock_test 00:10:17.038 ************************************ 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83506 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83506 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83506 ']' 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.038 23:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.038 [2024-09-30 23:27:56.858496] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:17.038 [2024-09-30 23:27:56.858710] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83506 ] 00:10:17.297 [2024-09-30 23:27:57.000627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.297 [2024-09-30 23:27:57.043629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.297 [2024-09-30 23:27:57.085665] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.297 [2024-09-30 23:27:57.085778] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:18.234 
23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.234 malloc1 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.234 [2024-09-30 23:27:57.748112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.234 [2024-09-30 23:27:57.748260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.234 [2024-09-30 23:27:57.748312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:18.234 [2024-09-30 23:27:57.748358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.234 [2024-09-30 23:27:57.750527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.234 [2024-09-30 23:27:57.750608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.234 pt1 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.234 malloc2 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.234 [2024-09-30 23:27:57.786803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.234 [2024-09-30 23:27:57.786925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.234 [2024-09-30 23:27:57.786949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:18.234 [2024-09-30 23:27:57.786964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.234 [2024-09-30 23:27:57.789548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.234 [2024-09-30 23:27:57.789593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.234 
pt2 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.234 malloc3 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.234 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.234 [2024-09-30 23:27:57.815183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:18.234 [2024-09-30 23:27:57.815274] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.234 [2024-09-30 23:27:57.815307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:18.234 [2024-09-30 23:27:57.815335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.234 [2024-09-30 23:27:57.817386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.235 [2024-09-30 23:27:57.817457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:18.235 pt3 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.235 malloc4 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.235 [2024-09-30 23:27:57.847562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:18.235 [2024-09-30 23:27:57.847650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.235 [2024-09-30 23:27:57.847681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:18.235 [2024-09-30 23:27:57.847711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.235 [2024-09-30 23:27:57.849745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.235 [2024-09-30 23:27:57.849832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:18.235 pt4 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.235 [2024-09-30 23:27:57.859619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.235 [2024-09-30 
23:27:57.861445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.235 [2024-09-30 23:27:57.861537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:18.235 [2024-09-30 23:27:57.861616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:18.235 [2024-09-30 23:27:57.861817] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:18.235 [2024-09-30 23:27:57.861890] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:18.235 [2024-09-30 23:27:57.862163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:18.235 [2024-09-30 23:27:57.862331] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:18.235 [2024-09-30 23:27:57.862376] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:18.235 [2024-09-30 23:27:57.862525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.235 "name": "raid_bdev1", 00:10:18.235 "uuid": "2748b40a-1e47-4e45-82bc-04a5838e8bb1", 00:10:18.235 "strip_size_kb": 64, 00:10:18.235 "state": "online", 00:10:18.235 "raid_level": "concat", 00:10:18.235 "superblock": true, 00:10:18.235 "num_base_bdevs": 4, 00:10:18.235 "num_base_bdevs_discovered": 4, 00:10:18.235 "num_base_bdevs_operational": 4, 00:10:18.235 "base_bdevs_list": [ 00:10:18.235 { 00:10:18.235 "name": "pt1", 00:10:18.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.235 "is_configured": true, 00:10:18.235 "data_offset": 2048, 00:10:18.235 "data_size": 63488 00:10:18.235 }, 00:10:18.235 { 00:10:18.235 "name": "pt2", 00:10:18.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.235 "is_configured": true, 00:10:18.235 "data_offset": 2048, 00:10:18.235 "data_size": 63488 00:10:18.235 }, 00:10:18.235 { 00:10:18.235 "name": "pt3", 00:10:18.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.235 "is_configured": true, 00:10:18.235 "data_offset": 2048, 00:10:18.235 
"data_size": 63488 00:10:18.235 }, 00:10:18.235 { 00:10:18.235 "name": "pt4", 00:10:18.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.235 "is_configured": true, 00:10:18.235 "data_offset": 2048, 00:10:18.235 "data_size": 63488 00:10:18.235 } 00:10:18.235 ] 00:10:18.235 }' 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.235 23:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.493 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.493 [2024-09-30 23:27:58.331138] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.750 "name": "raid_bdev1", 00:10:18.750 "aliases": [ 00:10:18.750 "2748b40a-1e47-4e45-82bc-04a5838e8bb1" 
00:10:18.750 ], 00:10:18.750 "product_name": "Raid Volume", 00:10:18.750 "block_size": 512, 00:10:18.750 "num_blocks": 253952, 00:10:18.750 "uuid": "2748b40a-1e47-4e45-82bc-04a5838e8bb1", 00:10:18.750 "assigned_rate_limits": { 00:10:18.750 "rw_ios_per_sec": 0, 00:10:18.750 "rw_mbytes_per_sec": 0, 00:10:18.750 "r_mbytes_per_sec": 0, 00:10:18.750 "w_mbytes_per_sec": 0 00:10:18.750 }, 00:10:18.750 "claimed": false, 00:10:18.750 "zoned": false, 00:10:18.750 "supported_io_types": { 00:10:18.750 "read": true, 00:10:18.750 "write": true, 00:10:18.750 "unmap": true, 00:10:18.750 "flush": true, 00:10:18.750 "reset": true, 00:10:18.750 "nvme_admin": false, 00:10:18.750 "nvme_io": false, 00:10:18.750 "nvme_io_md": false, 00:10:18.750 "write_zeroes": true, 00:10:18.750 "zcopy": false, 00:10:18.750 "get_zone_info": false, 00:10:18.750 "zone_management": false, 00:10:18.750 "zone_append": false, 00:10:18.750 "compare": false, 00:10:18.750 "compare_and_write": false, 00:10:18.750 "abort": false, 00:10:18.750 "seek_hole": false, 00:10:18.750 "seek_data": false, 00:10:18.750 "copy": false, 00:10:18.750 "nvme_iov_md": false 00:10:18.750 }, 00:10:18.750 "memory_domains": [ 00:10:18.750 { 00:10:18.750 "dma_device_id": "system", 00:10:18.750 "dma_device_type": 1 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.750 "dma_device_type": 2 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "dma_device_id": "system", 00:10:18.750 "dma_device_type": 1 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.750 "dma_device_type": 2 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "dma_device_id": "system", 00:10:18.750 "dma_device_type": 1 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.750 "dma_device_type": 2 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "dma_device_id": "system", 00:10:18.750 "dma_device_type": 1 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:18.750 "dma_device_type": 2 00:10:18.750 } 00:10:18.750 ], 00:10:18.750 "driver_specific": { 00:10:18.750 "raid": { 00:10:18.750 "uuid": "2748b40a-1e47-4e45-82bc-04a5838e8bb1", 00:10:18.750 "strip_size_kb": 64, 00:10:18.750 "state": "online", 00:10:18.750 "raid_level": "concat", 00:10:18.750 "superblock": true, 00:10:18.750 "num_base_bdevs": 4, 00:10:18.750 "num_base_bdevs_discovered": 4, 00:10:18.750 "num_base_bdevs_operational": 4, 00:10:18.750 "base_bdevs_list": [ 00:10:18.750 { 00:10:18.750 "name": "pt1", 00:10:18.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.750 "is_configured": true, 00:10:18.750 "data_offset": 2048, 00:10:18.750 "data_size": 63488 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "name": "pt2", 00:10:18.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.750 "is_configured": true, 00:10:18.750 "data_offset": 2048, 00:10:18.750 "data_size": 63488 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "name": "pt3", 00:10:18.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.750 "is_configured": true, 00:10:18.750 "data_offset": 2048, 00:10:18.750 "data_size": 63488 00:10:18.750 }, 00:10:18.750 { 00:10:18.750 "name": "pt4", 00:10:18.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.750 "is_configured": true, 00:10:18.750 "data_offset": 2048, 00:10:18.750 "data_size": 63488 00:10:18.750 } 00:10:18.750 ] 00:10:18.750 } 00:10:18.750 } 00:10:18.750 }' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.750 pt2 00:10:18.750 pt3 00:10:18.750 pt4' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.750 23:27:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.750 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
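The trace above (`verify_raid_bdev_properties`) compares a fingerprint of `[.block_size, .md_size, .md_interleave, .dif_type]` between the raid bdev and each base bdev that `jq` selects as configured. The following is a minimal Python sketch of those two jq filters, not SPDK code: the dict is a trimmed stand-in for the `bdev_get_bdevs` output dumped in this log, and the unconfigured `pt4` slot is a hypothetical addition to show the `select(.is_configured == true)` filter doing work (in the log itself all four slots are configured).

```python
# Sketch (assumption: trimmed stand-in for the bdev_get_bdevs RPC output above).
raid_bdev = {
    "name": "raid_bdev1",
    "block_size": 512,
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "pt1", "is_configured": True},
                {"name": "pt2", "is_configured": True},
                {"name": "pt3", "is_configured": True},
                {"name": "pt4", "is_configured": False},  # hypothetical unconfigured slot
            ]
        }
    },
}

def fingerprint(bdev):
    # jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
    # Absent fields render as empty strings, matching jq's null -> "" in join,
    # which is why the log shows cmp_raid_bdev='512 ' with trailing blanks.
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

# jq: .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
configured = [b["name"]
              for b in raid_bdev["driver_specific"]["raid"]["base_bdevs_list"]
              if b["is_configured"]]

print(configured)               # names the test would fetch with bdev_get_bdevs -b
print(repr(fingerprint(raid_bdev)))
```

The test then computes the same fingerprint for each configured base bdev and requires it to match the raid bdev's, which is what the repeated `[[ 512 == \5\1\2\ \ \ ]]` comparisons in the trace are doing.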
00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:19.008 [2024-09-30 23:27:58.626532] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2748b40a-1e47-4e45-82bc-04a5838e8bb1 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2748b40a-1e47-4e45-82bc-04a5838e8bb1 ']' 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.008 [2024-09-30 23:27:58.674141] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.008 [2024-09-30 23:27:58.674176] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.008 [2024-09-30 23:27:58.674246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.008 [2024-09-30 23:27:58.674316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.008 [2024-09-30 23:27:58.674329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.008 [2024-09-30 23:27:58.841927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:19.008 [2024-09-30 23:27:58.843795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:19.008 [2024-09-30 23:27:58.843898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:19.008 [2024-09-30 23:27:58.843946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:19.008 [2024-09-30 23:27:58.844023] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:19.008 [2024-09-30 23:27:58.844114] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:19.008 [2024-09-30 23:27:58.844167] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:19.008 [2024-09-30 23:27:58.844218] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:19.008 [2024-09-30 23:27:58.844265] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.008 [2024-09-30 23:27:58.844276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:10:19.008 request: 00:10:19.008 { 00:10:19.008 "name": "raid_bdev1", 00:10:19.008 "raid_level": "concat", 00:10:19.008 "base_bdevs": [ 00:10:19.008 "malloc1", 00:10:19.008 "malloc2", 00:10:19.008 "malloc3", 00:10:19.008 "malloc4" 00:10:19.008 ], 00:10:19.008 "strip_size_kb": 64, 00:10:19.008 "superblock": false, 00:10:19.008 "method": "bdev_raid_create", 00:10:19.008 "req_id": 1 00:10:19.008 } 00:10:19.008 Got JSON-RPC error response 00:10:19.008 response: 00:10:19.008 { 00:10:19.008 "code": -17, 00:10:19.008 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:19.008 } 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.008 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:19.266 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.267 [2024-09-30 23:27:58.905755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:19.267 [2024-09-30 23:27:58.905838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.267 [2024-09-30 23:27:58.905883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.267 [2024-09-30 23:27:58.905910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.267 [2024-09-30 23:27:58.908010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.267 [2024-09-30 23:27:58.908082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:19.267 [2024-09-30 23:27:58.908166] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:19.267 [2024-09-30 23:27:58.908229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:19.267 pt1 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.267 "name": "raid_bdev1", 00:10:19.267 "uuid": "2748b40a-1e47-4e45-82bc-04a5838e8bb1", 00:10:19.267 "strip_size_kb": 64, 00:10:19.267 "state": "configuring", 00:10:19.267 "raid_level": "concat", 00:10:19.267 "superblock": true, 00:10:19.267 "num_base_bdevs": 4, 00:10:19.267 "num_base_bdevs_discovered": 1, 00:10:19.267 "num_base_bdevs_operational": 4, 00:10:19.267 "base_bdevs_list": [ 00:10:19.267 { 00:10:19.267 "name": "pt1", 00:10:19.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.267 "is_configured": true, 00:10:19.267 "data_offset": 2048, 00:10:19.267 "data_size": 63488 00:10:19.267 }, 00:10:19.267 { 00:10:19.267 "name": null, 00:10:19.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.267 "is_configured": false, 00:10:19.267 "data_offset": 2048, 00:10:19.267 "data_size": 63488 00:10:19.267 }, 00:10:19.267 { 00:10:19.267 "name": null, 00:10:19.267 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.267 "is_configured": false, 00:10:19.267 "data_offset": 2048, 00:10:19.267 "data_size": 63488 00:10:19.267 }, 00:10:19.267 { 00:10:19.267 "name": null, 00:10:19.267 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.267 "is_configured": false, 00:10:19.267 "data_offset": 2048, 00:10:19.267 "data_size": 63488 00:10:19.267 } 00:10:19.267 ] 00:10:19.267 }' 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.267 23:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.527 [2024-09-30 23:27:59.297112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.527 [2024-09-30 23:27:59.297171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.527 [2024-09-30 23:27:59.297192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:19.527 [2024-09-30 23:27:59.297201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.527 [2024-09-30 23:27:59.297591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.527 [2024-09-30 23:27:59.297608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.527 [2024-09-30 23:27:59.297684] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.527 [2024-09-30 23:27:59.297705] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.527 pt2 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.527 [2024-09-30 23:27:59.305101] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.527 23:27:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.527 "name": "raid_bdev1", 00:10:19.527 "uuid": "2748b40a-1e47-4e45-82bc-04a5838e8bb1", 00:10:19.527 "strip_size_kb": 64, 00:10:19.527 "state": "configuring", 00:10:19.527 "raid_level": "concat", 00:10:19.527 "superblock": true, 00:10:19.527 "num_base_bdevs": 4, 00:10:19.527 "num_base_bdevs_discovered": 1, 00:10:19.527 "num_base_bdevs_operational": 4, 00:10:19.527 "base_bdevs_list": [ 00:10:19.527 { 00:10:19.527 "name": "pt1", 00:10:19.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.527 "is_configured": true, 00:10:19.527 "data_offset": 2048, 00:10:19.527 "data_size": 63488 00:10:19.527 }, 00:10:19.527 { 00:10:19.527 "name": null, 00:10:19.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.527 "is_configured": false, 00:10:19.527 "data_offset": 0, 00:10:19.527 "data_size": 63488 00:10:19.527 }, 00:10:19.527 { 00:10:19.527 "name": null, 00:10:19.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.527 "is_configured": false, 00:10:19.527 "data_offset": 2048, 00:10:19.527 "data_size": 63488 00:10:19.527 }, 00:10:19.527 { 00:10:19.527 "name": null, 00:10:19.527 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.527 "is_configured": false, 00:10:19.527 "data_offset": 2048, 00:10:19.527 "data_size": 63488 00:10:19.527 } 00:10:19.527 ] 00:10:19.527 }' 00:10:19.527 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.528 23:27:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.094 [2024-09-30 23:27:59.772316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.094 [2024-09-30 23:27:59.772438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.094 [2024-09-30 23:27:59.772471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:20.094 [2024-09-30 23:27:59.772500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.094 [2024-09-30 23:27:59.772933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.094 [2024-09-30 23:27:59.772993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.094 [2024-09-30 23:27:59.773100] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:20.094 [2024-09-30 23:27:59.773152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.094 pt2 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.094 [2024-09-30 23:27:59.784241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.094 [2024-09-30 23:27:59.784295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.094 [2024-09-30 23:27:59.784311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:20.094 [2024-09-30 23:27:59.784320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.094 [2024-09-30 23:27:59.784621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.094 [2024-09-30 23:27:59.784639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.094 [2024-09-30 23:27:59.784692] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:20.094 [2024-09-30 23:27:59.784711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.094 pt3 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.094 [2024-09-30 23:27:59.796228] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:20.094 [2024-09-30 23:27:59.796279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.094 [2024-09-30 23:27:59.796293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:20.094 [2024-09-30 23:27:59.796302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.094 [2024-09-30 23:27:59.796577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.094 [2024-09-30 23:27:59.796594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.094 [2024-09-30 23:27:59.796642] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:20.094 [2024-09-30 23:27:59.796661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.094 [2024-09-30 23:27:59.796760] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:20.094 [2024-09-30 23:27:59.796773] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.094 [2024-09-30 23:27:59.797016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:20.094 [2024-09-30 23:27:59.797132] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:20.094 [2024-09-30 23:27:59.797141] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:20.094 [2024-09-30 23:27:59.797239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.094 pt4 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.094 "name": "raid_bdev1", 00:10:20.094 "uuid": "2748b40a-1e47-4e45-82bc-04a5838e8bb1", 00:10:20.094 "strip_size_kb": 64, 00:10:20.094 "state": "online", 00:10:20.094 "raid_level": "concat", 00:10:20.094 
"superblock": true, 00:10:20.094 "num_base_bdevs": 4, 00:10:20.094 "num_base_bdevs_discovered": 4, 00:10:20.094 "num_base_bdevs_operational": 4, 00:10:20.094 "base_bdevs_list": [ 00:10:20.094 { 00:10:20.094 "name": "pt1", 00:10:20.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.094 "is_configured": true, 00:10:20.094 "data_offset": 2048, 00:10:20.094 "data_size": 63488 00:10:20.094 }, 00:10:20.094 { 00:10:20.094 "name": "pt2", 00:10:20.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.094 "is_configured": true, 00:10:20.094 "data_offset": 2048, 00:10:20.094 "data_size": 63488 00:10:20.094 }, 00:10:20.094 { 00:10:20.094 "name": "pt3", 00:10:20.094 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.094 "is_configured": true, 00:10:20.094 "data_offset": 2048, 00:10:20.094 "data_size": 63488 00:10:20.094 }, 00:10:20.094 { 00:10:20.094 "name": "pt4", 00:10:20.094 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.094 "is_configured": true, 00:10:20.094 "data_offset": 2048, 00:10:20.094 "data_size": 63488 00:10:20.094 } 00:10:20.094 ] 00:10:20.094 }' 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.094 23:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.660 23:28:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.660 [2024-09-30 23:28:00.223877] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.660 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.660 "name": "raid_bdev1", 00:10:20.660 "aliases": [ 00:10:20.660 "2748b40a-1e47-4e45-82bc-04a5838e8bb1" 00:10:20.660 ], 00:10:20.660 "product_name": "Raid Volume", 00:10:20.660 "block_size": 512, 00:10:20.660 "num_blocks": 253952, 00:10:20.660 "uuid": "2748b40a-1e47-4e45-82bc-04a5838e8bb1", 00:10:20.660 "assigned_rate_limits": { 00:10:20.660 "rw_ios_per_sec": 0, 00:10:20.660 "rw_mbytes_per_sec": 0, 00:10:20.660 "r_mbytes_per_sec": 0, 00:10:20.660 "w_mbytes_per_sec": 0 00:10:20.660 }, 00:10:20.660 "claimed": false, 00:10:20.660 "zoned": false, 00:10:20.660 "supported_io_types": { 00:10:20.660 "read": true, 00:10:20.660 "write": true, 00:10:20.660 "unmap": true, 00:10:20.660 "flush": true, 00:10:20.660 "reset": true, 00:10:20.660 "nvme_admin": false, 00:10:20.660 "nvme_io": false, 00:10:20.660 "nvme_io_md": false, 00:10:20.660 "write_zeroes": true, 00:10:20.660 "zcopy": false, 00:10:20.660 "get_zone_info": false, 00:10:20.660 "zone_management": false, 00:10:20.660 "zone_append": false, 00:10:20.660 "compare": false, 00:10:20.660 "compare_and_write": false, 00:10:20.660 "abort": false, 00:10:20.660 "seek_hole": false, 00:10:20.660 "seek_data": false, 00:10:20.660 "copy": false, 00:10:20.660 "nvme_iov_md": false 00:10:20.660 }, 00:10:20.660 
"memory_domains": [ 00:10:20.660 { 00:10:20.660 "dma_device_id": "system", 00:10:20.660 "dma_device_type": 1 00:10:20.660 }, 00:10:20.660 { 00:10:20.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.660 "dma_device_type": 2 00:10:20.660 }, 00:10:20.660 { 00:10:20.660 "dma_device_id": "system", 00:10:20.660 "dma_device_type": 1 00:10:20.660 }, 00:10:20.660 { 00:10:20.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.660 "dma_device_type": 2 00:10:20.660 }, 00:10:20.660 { 00:10:20.660 "dma_device_id": "system", 00:10:20.660 "dma_device_type": 1 00:10:20.660 }, 00:10:20.660 { 00:10:20.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.660 "dma_device_type": 2 00:10:20.660 }, 00:10:20.660 { 00:10:20.660 "dma_device_id": "system", 00:10:20.660 "dma_device_type": 1 00:10:20.660 }, 00:10:20.660 { 00:10:20.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.660 "dma_device_type": 2 00:10:20.660 } 00:10:20.660 ], 00:10:20.660 "driver_specific": { 00:10:20.660 "raid": { 00:10:20.660 "uuid": "2748b40a-1e47-4e45-82bc-04a5838e8bb1", 00:10:20.660 "strip_size_kb": 64, 00:10:20.660 "state": "online", 00:10:20.660 "raid_level": "concat", 00:10:20.660 "superblock": true, 00:10:20.660 "num_base_bdevs": 4, 00:10:20.660 "num_base_bdevs_discovered": 4, 00:10:20.660 "num_base_bdevs_operational": 4, 00:10:20.660 "base_bdevs_list": [ 00:10:20.660 { 00:10:20.661 "name": "pt1", 00:10:20.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.661 "is_configured": true, 00:10:20.661 "data_offset": 2048, 00:10:20.661 "data_size": 63488 00:10:20.661 }, 00:10:20.661 { 00:10:20.661 "name": "pt2", 00:10:20.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.661 "is_configured": true, 00:10:20.661 "data_offset": 2048, 00:10:20.661 "data_size": 63488 00:10:20.661 }, 00:10:20.661 { 00:10:20.661 "name": "pt3", 00:10:20.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.661 "is_configured": true, 00:10:20.661 "data_offset": 2048, 00:10:20.661 "data_size": 63488 
00:10:20.661 }, 00:10:20.661 { 00:10:20.661 "name": "pt4", 00:10:20.661 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.661 "is_configured": true, 00:10:20.661 "data_offset": 2048, 00:10:20.661 "data_size": 63488 00:10:20.661 } 00:10:20.661 ] 00:10:20.661 } 00:10:20.661 } 00:10:20.661 }' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.661 pt2 00:10:20.661 pt3 00:10:20.661 pt4' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.661 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.919 [2024-09-30 23:28:00.547276] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2748b40a-1e47-4e45-82bc-04a5838e8bb1 '!=' 2748b40a-1e47-4e45-82bc-04a5838e8bb1 ']' 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:20.919 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83506 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83506 ']' 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83506 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83506 00:10:20.920 killing process with pid 83506 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83506' 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83506 00:10:20.920 [2024-09-30 23:28:00.631497] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.920 [2024-09-30 23:28:00.631589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.920 [2024-09-30 23:28:00.631659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.920 [2024-09-30 23:28:00.631670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:20.920 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83506 00:10:20.920 [2024-09-30 23:28:00.675216] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.178 23:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:21.178 00:10:21.178 real 0m4.145s 00:10:21.178 user 0m6.510s 00:10:21.178 sys 0m0.924s 00:10:21.178 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.178 23:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.178 ************************************ 00:10:21.178 END TEST raid_superblock_test 
00:10:21.178 ************************************ 00:10:21.178 23:28:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:21.178 23:28:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:21.178 23:28:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.178 23:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.178 ************************************ 00:10:21.178 START TEST raid_read_error_test 00:10:21.178 ************************************ 00:10:21.178 23:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:21.178 23:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:21.178 23:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:21.178 23:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aRDr7W4e1j 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83754 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83754 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83754 ']' 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.178 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.436 [2024-09-30 23:28:01.102639] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:21.436 [2024-09-30 23:28:01.102765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83754 ] 00:10:21.436 [2024-09-30 23:28:01.263727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.695 [2024-09-30 23:28:01.308403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.695 [2024-09-30 23:28:01.350332] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.695 [2024-09-30 23:28:01.350365] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 BaseBdev1_malloc 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 true 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 [2024-09-30 23:28:01.952321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:22.286 [2024-09-30 23:28:01.952393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.286 [2024-09-30 23:28:01.952412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:22.286 [2024-09-30 23:28:01.952427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.286 [2024-09-30 23:28:01.954538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.286 [2024-09-30 23:28:01.954576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:22.286 BaseBdev1 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 BaseBdev2_malloc 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 true 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 [2024-09-30 23:28:02.009264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:22.286 [2024-09-30 23:28:02.009337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.286 [2024-09-30 23:28:02.009366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:22.286 [2024-09-30 23:28:02.009381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.286 [2024-09-30 23:28:02.012581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.286 [2024-09-30 23:28:02.012635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:22.286 BaseBdev2 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 BaseBdev3_malloc 00:10:22.286 23:28:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 true 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 [2024-09-30 23:28:02.049957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:22.286 [2024-09-30 23:28:02.050018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.286 [2024-09-30 23:28:02.050035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:22.286 [2024-09-30 23:28:02.050044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.286 [2024-09-30 23:28:02.052018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.286 [2024-09-30 23:28:02.052056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:22.286 BaseBdev3 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.286 BaseBdev4_malloc 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.286 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.287 true 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.287 [2024-09-30 23:28:02.090346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:22.287 [2024-09-30 23:28:02.090407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.287 [2024-09-30 23:28:02.090427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:22.287 [2024-09-30 23:28:02.090436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.287 [2024-09-30 23:28:02.092412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.287 [2024-09-30 23:28:02.092450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:22.287 BaseBdev4 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.287 [2024-09-30 23:28:02.102382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.287 [2024-09-30 23:28:02.104192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.287 [2024-09-30 23:28:02.104280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.287 [2024-09-30 23:28:02.104332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:22.287 [2024-09-30 23:28:02.104534] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:22.287 [2024-09-30 23:28:02.104553] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:22.287 [2024-09-30 23:28:02.104781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:22.287 [2024-09-30 23:28:02.104942] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:22.287 [2024-09-30 23:28:02.104960] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:22.287 [2024-09-30 23:28:02.105081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:22.287 23:28:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.287 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.546 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.546 "name": "raid_bdev1", 00:10:22.546 "uuid": "2c971ea1-4b95-4567-a98b-23c5d3105f37", 00:10:22.546 "strip_size_kb": 64, 00:10:22.546 "state": "online", 00:10:22.546 "raid_level": "concat", 00:10:22.546 "superblock": true, 00:10:22.546 "num_base_bdevs": 4, 00:10:22.546 "num_base_bdevs_discovered": 4, 00:10:22.546 "num_base_bdevs_operational": 4, 00:10:22.546 "base_bdevs_list": [ 
00:10:22.546 { 00:10:22.546 "name": "BaseBdev1", 00:10:22.546 "uuid": "74331b9d-073b-587c-a6ab-4d9c77ee9a3c", 00:10:22.546 "is_configured": true, 00:10:22.546 "data_offset": 2048, 00:10:22.546 "data_size": 63488 00:10:22.546 }, 00:10:22.546 { 00:10:22.546 "name": "BaseBdev2", 00:10:22.546 "uuid": "f60c3555-a57f-588e-9131-e67bd2dc50dd", 00:10:22.546 "is_configured": true, 00:10:22.546 "data_offset": 2048, 00:10:22.546 "data_size": 63488 00:10:22.546 }, 00:10:22.546 { 00:10:22.546 "name": "BaseBdev3", 00:10:22.546 "uuid": "7e669f3e-3c96-5670-ab85-1c0ae680b620", 00:10:22.546 "is_configured": true, 00:10:22.546 "data_offset": 2048, 00:10:22.546 "data_size": 63488 00:10:22.546 }, 00:10:22.546 { 00:10:22.546 "name": "BaseBdev4", 00:10:22.546 "uuid": "fd396e8e-c844-5198-85bc-ef7a23e00818", 00:10:22.546 "is_configured": true, 00:10:22.546 "data_offset": 2048, 00:10:22.546 "data_size": 63488 00:10:22.546 } 00:10:22.546 ] 00:10:22.546 }' 00:10:22.546 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.546 23:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.805 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.805 23:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:23.064 [2024-09-30 23:28:02.677799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.002 23:28:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.002 23:28:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.002 "name": "raid_bdev1", 00:10:24.002 "uuid": "2c971ea1-4b95-4567-a98b-23c5d3105f37", 00:10:24.002 "strip_size_kb": 64, 00:10:24.002 "state": "online", 00:10:24.002 "raid_level": "concat", 00:10:24.002 "superblock": true, 00:10:24.002 "num_base_bdevs": 4, 00:10:24.002 "num_base_bdevs_discovered": 4, 00:10:24.002 "num_base_bdevs_operational": 4, 00:10:24.002 "base_bdevs_list": [ 00:10:24.002 { 00:10:24.002 "name": "BaseBdev1", 00:10:24.002 "uuid": "74331b9d-073b-587c-a6ab-4d9c77ee9a3c", 00:10:24.002 "is_configured": true, 00:10:24.002 "data_offset": 2048, 00:10:24.002 "data_size": 63488 00:10:24.002 }, 00:10:24.002 { 00:10:24.002 "name": "BaseBdev2", 00:10:24.002 "uuid": "f60c3555-a57f-588e-9131-e67bd2dc50dd", 00:10:24.002 "is_configured": true, 00:10:24.002 "data_offset": 2048, 00:10:24.002 "data_size": 63488 00:10:24.002 }, 00:10:24.002 { 00:10:24.002 "name": "BaseBdev3", 00:10:24.002 "uuid": "7e669f3e-3c96-5670-ab85-1c0ae680b620", 00:10:24.002 "is_configured": true, 00:10:24.002 "data_offset": 2048, 00:10:24.002 "data_size": 63488 00:10:24.002 }, 00:10:24.002 { 00:10:24.002 "name": "BaseBdev4", 00:10:24.002 "uuid": "fd396e8e-c844-5198-85bc-ef7a23e00818", 00:10:24.002 "is_configured": true, 00:10:24.002 "data_offset": 2048, 00:10:24.002 "data_size": 63488 00:10:24.002 } 00:10:24.002 ] 00:10:24.002 }' 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.002 23:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.261 [2024-09-30 23:28:04.093281] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.261 [2024-09-30 23:28:04.093319] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.261 [2024-09-30 23:28:04.095818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.261 [2024-09-30 23:28:04.095886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.261 [2024-09-30 23:28:04.095935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.261 [2024-09-30 23:28:04.095944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:24.261 { 00:10:24.261 "results": [ 00:10:24.261 { 00:10:24.261 "job": "raid_bdev1", 00:10:24.261 "core_mask": "0x1", 00:10:24.261 "workload": "randrw", 00:10:24.261 "percentage": 50, 00:10:24.261 "status": "finished", 00:10:24.261 "queue_depth": 1, 00:10:24.261 "io_size": 131072, 00:10:24.261 "runtime": 1.416425, 00:10:24.261 "iops": 17078.207458919464, 00:10:24.261 "mibps": 2134.775932364933, 00:10:24.261 "io_failed": 1, 00:10:24.261 "io_timeout": 0, 00:10:24.261 "avg_latency_us": 81.25791168139871, 00:10:24.261 "min_latency_us": 24.593886462882097, 00:10:24.261 "max_latency_us": 1395.1441048034935 00:10:24.261 } 00:10:24.261 ], 00:10:24.261 "core_count": 1 00:10:24.261 } 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83754 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83754 ']' 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83754 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.261 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83754 00:10:24.520 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.520 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.520 killing process with pid 83754 00:10:24.520 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83754' 00:10:24.520 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83754 00:10:24.520 [2024-09-30 23:28:04.133516] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:24.520 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83754 00:10:24.520 [2024-09-30 23:28:04.168824] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aRDr7W4e1j 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:24.778 00:10:24.778 real 0m3.422s 00:10:24.778 user 0m4.311s 00:10:24.778 sys 0m0.575s 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:24.778 23:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.778 ************************************ 00:10:24.778 END TEST raid_read_error_test 00:10:24.778 ************************************ 00:10:24.778 23:28:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:24.778 23:28:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:24.778 23:28:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.778 23:28:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.778 ************************************ 00:10:24.778 START TEST raid_write_error_test 00:10:24.778 ************************************ 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JgOB2Bhskz 00:10:24.778 23:28:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83889 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83889 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83889 ']' 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.778 23:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.778 [2024-09-30 23:28:04.606631] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:24.778 [2024-09-30 23:28:04.607320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83889 ] 00:10:25.036 [2024-09-30 23:28:04.772103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.036 [2024-09-30 23:28:04.816214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.036 [2024-09-30 23:28:04.858009] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.036 [2024-09-30 23:28:04.858053] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.603 BaseBdev1_malloc 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.603 true 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.603 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.603 [2024-09-30 23:28:05.455857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.603 [2024-09-30 23:28:05.455967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.603 [2024-09-30 23:28:05.455990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:25.603 [2024-09-30 23:28:05.456006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.862 [2024-09-30 23:28:05.458120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.862 [2024-09-30 23:28:05.458156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.862 BaseBdev1 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.862 BaseBdev2_malloc 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.862 23:28:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.862 true 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.862 [2024-09-30 23:28:05.503572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.862 [2024-09-30 23:28:05.503628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.862 [2024-09-30 23:28:05.503645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:25.862 [2024-09-30 23:28:05.503654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.862 [2024-09-30 23:28:05.505637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.862 [2024-09-30 23:28:05.505672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.862 BaseBdev2 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:25.862 BaseBdev3_malloc 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.862 true 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.862 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.863 [2024-09-30 23:28:05.536043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.863 [2024-09-30 23:28:05.536095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.863 [2024-09-30 23:28:05.536112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:25.863 [2024-09-30 23:28:05.536137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.863 [2024-09-30 23:28:05.538126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.863 [2024-09-30 23:28:05.538158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.863 BaseBdev3 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.863 BaseBdev4_malloc 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.863 true 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.863 [2024-09-30 23:28:05.564406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:25.863 [2024-09-30 23:28:05.564454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.863 [2024-09-30 23:28:05.564489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.863 [2024-09-30 23:28:05.564498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.863 [2024-09-30 23:28:05.566490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.863 [2024-09-30 23:28:05.566550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:25.863 BaseBdev4 
00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.863 [2024-09-30 23:28:05.572442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.863 [2024-09-30 23:28:05.574260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.863 [2024-09-30 23:28:05.574361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.863 [2024-09-30 23:28:05.574412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:25.863 [2024-09-30 23:28:05.574600] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:25.863 [2024-09-30 23:28:05.574619] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:25.863 [2024-09-30 23:28:05.574894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:25.863 [2024-09-30 23:28:05.575047] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:25.863 [2024-09-30 23:28:05.575064] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:25.863 [2024-09-30 23:28:05.575186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.863 "name": "raid_bdev1", 00:10:25.863 "uuid": "982616f4-4d72-42d4-b669-1c59ae24a196", 00:10:25.863 "strip_size_kb": 64, 00:10:25.863 "state": "online", 00:10:25.863 "raid_level": "concat", 00:10:25.863 "superblock": true, 00:10:25.863 "num_base_bdevs": 4, 00:10:25.863 "num_base_bdevs_discovered": 4, 00:10:25.863 
"num_base_bdevs_operational": 4, 00:10:25.863 "base_bdevs_list": [ 00:10:25.863 { 00:10:25.863 "name": "BaseBdev1", 00:10:25.863 "uuid": "3db2f30b-6d3d-52a8-8293-0a7a94b56fc3", 00:10:25.863 "is_configured": true, 00:10:25.863 "data_offset": 2048, 00:10:25.863 "data_size": 63488 00:10:25.863 }, 00:10:25.863 { 00:10:25.863 "name": "BaseBdev2", 00:10:25.863 "uuid": "51c320fe-8430-5384-98fd-c32ec1a5a553", 00:10:25.863 "is_configured": true, 00:10:25.863 "data_offset": 2048, 00:10:25.863 "data_size": 63488 00:10:25.863 }, 00:10:25.863 { 00:10:25.863 "name": "BaseBdev3", 00:10:25.863 "uuid": "b5c4add3-7113-5087-8611-0dba7f690a1d", 00:10:25.863 "is_configured": true, 00:10:25.863 "data_offset": 2048, 00:10:25.863 "data_size": 63488 00:10:25.863 }, 00:10:25.863 { 00:10:25.863 "name": "BaseBdev4", 00:10:25.863 "uuid": "ad6814f0-eec2-52c4-b91c-3a05e76d2366", 00:10:25.863 "is_configured": true, 00:10:25.863 "data_offset": 2048, 00:10:25.863 "data_size": 63488 00:10:25.863 } 00:10:25.863 ] 00:10:25.863 }' 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.863 23:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.431 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:26.431 23:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.431 [2024-09-30 23:28:06.087892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:27.368 23:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:27.368 23:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.368 23:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.368 23:28:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.368 "name": "raid_bdev1", 00:10:27.368 "uuid": "982616f4-4d72-42d4-b669-1c59ae24a196", 00:10:27.368 "strip_size_kb": 64, 00:10:27.368 "state": "online", 00:10:27.368 "raid_level": "concat", 00:10:27.368 "superblock": true, 00:10:27.368 "num_base_bdevs": 4, 00:10:27.368 "num_base_bdevs_discovered": 4, 00:10:27.368 "num_base_bdevs_operational": 4, 00:10:27.368 "base_bdevs_list": [ 00:10:27.368 { 00:10:27.368 "name": "BaseBdev1", 00:10:27.368 "uuid": "3db2f30b-6d3d-52a8-8293-0a7a94b56fc3", 00:10:27.368 "is_configured": true, 00:10:27.368 "data_offset": 2048, 00:10:27.368 "data_size": 63488 00:10:27.368 }, 00:10:27.368 { 00:10:27.368 "name": "BaseBdev2", 00:10:27.368 "uuid": "51c320fe-8430-5384-98fd-c32ec1a5a553", 00:10:27.368 "is_configured": true, 00:10:27.368 "data_offset": 2048, 00:10:27.368 "data_size": 63488 00:10:27.368 }, 00:10:27.368 { 00:10:27.368 "name": "BaseBdev3", 00:10:27.368 "uuid": "b5c4add3-7113-5087-8611-0dba7f690a1d", 00:10:27.368 "is_configured": true, 00:10:27.368 "data_offset": 2048, 00:10:27.368 "data_size": 63488 00:10:27.368 }, 00:10:27.368 { 00:10:27.368 "name": "BaseBdev4", 00:10:27.368 "uuid": "ad6814f0-eec2-52c4-b91c-3a05e76d2366", 00:10:27.368 "is_configured": true, 00:10:27.368 "data_offset": 2048, 00:10:27.368 "data_size": 63488 00:10:27.368 } 00:10:27.368 ] 00:10:27.368 }' 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.368 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:27.627 [2024-09-30 23:28:07.445012] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.627 [2024-09-30 23:28:07.445063] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.627 [2024-09-30 23:28:07.447565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.627 [2024-09-30 23:28:07.447629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.627 [2024-09-30 23:28:07.447682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.627 [2024-09-30 23:28:07.447692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:27.627 { 00:10:27.627 "results": [ 00:10:27.627 { 00:10:27.627 "job": "raid_bdev1", 00:10:27.627 "core_mask": "0x1", 00:10:27.627 "workload": "randrw", 00:10:27.627 "percentage": 50, 00:10:27.627 "status": "finished", 00:10:27.627 "queue_depth": 1, 00:10:27.627 "io_size": 131072, 00:10:27.627 "runtime": 1.357968, 00:10:27.627 "iops": 16177.111684516867, 00:10:27.627 "mibps": 2022.1389605646084, 00:10:27.627 "io_failed": 1, 00:10:27.627 "io_timeout": 0, 00:10:27.627 "avg_latency_us": 86.19664938745565, 00:10:27.627 "min_latency_us": 24.146724890829695, 00:10:27.627 "max_latency_us": 6095.70655021834 00:10:27.627 } 00:10:27.627 ], 00:10:27.627 "core_count": 1 00:10:27.627 } 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83889 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83889 ']' 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83889 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.627 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83889 00:10:27.887 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:27.887 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:27.887 killing process with pid 83889 00:10:27.887 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83889' 00:10:27.887 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83889 00:10:27.887 [2024-09-30 23:28:07.489052] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.887 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83889 00:10:27.887 [2024-09-30 23:28:07.555892] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JgOB2Bhskz 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:28.146 00:10:28.146 real 0m3.425s 00:10:28.146 user 0m4.211s 
00:10:28.146 sys 0m0.598s 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.146 23:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.146 ************************************ 00:10:28.146 END TEST raid_write_error_test 00:10:28.146 ************************************ 00:10:28.146 23:28:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:28.147 23:28:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:28.147 23:28:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:28.147 23:28:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.147 23:28:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.147 ************************************ 00:10:28.147 START TEST raid_state_function_test 00:10:28.147 ************************************ 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.147 
23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.147 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:28.406 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:28.406 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:28.406 23:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:28.406 23:28:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84020 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:28.406 Process raid pid: 84020 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84020' 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84020 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84020 ']' 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.406 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.406 [2024-09-30 23:28:08.089247] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:28.406 [2024-09-30 23:28:08.089403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.406 [2024-09-30 23:28:08.252507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.665 [2024-09-30 23:28:08.296551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.665 [2024-09-30 23:28:08.338969] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.665 [2024-09-30 23:28:08.339011] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.233 [2024-09-30 23:28:08.908560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.233 [2024-09-30 23:28:08.908631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.233 [2024-09-30 23:28:08.908644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.233 [2024-09-30 23:28:08.908654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.233 [2024-09-30 23:28:08.908663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:29.233 [2024-09-30 23:28:08.908676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.233 [2024-09-30 23:28:08.908682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:29.233 [2024-09-30 23:28:08.908690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.233 23:28:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.234 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.234 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.234 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.234 "name": "Existed_Raid", 00:10:29.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.234 "strip_size_kb": 0, 00:10:29.234 "state": "configuring", 00:10:29.234 "raid_level": "raid1", 00:10:29.234 "superblock": false, 00:10:29.234 "num_base_bdevs": 4, 00:10:29.234 "num_base_bdevs_discovered": 0, 00:10:29.234 "num_base_bdevs_operational": 4, 00:10:29.234 "base_bdevs_list": [ 00:10:29.234 { 00:10:29.234 "name": "BaseBdev1", 00:10:29.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.234 "is_configured": false, 00:10:29.234 "data_offset": 0, 00:10:29.234 "data_size": 0 00:10:29.234 }, 00:10:29.234 { 00:10:29.234 "name": "BaseBdev2", 00:10:29.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.234 "is_configured": false, 00:10:29.234 "data_offset": 0, 00:10:29.234 "data_size": 0 00:10:29.234 }, 00:10:29.234 { 00:10:29.234 "name": "BaseBdev3", 00:10:29.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.234 "is_configured": false, 00:10:29.234 "data_offset": 0, 00:10:29.234 "data_size": 0 00:10:29.234 }, 00:10:29.234 { 00:10:29.234 "name": "BaseBdev4", 00:10:29.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.234 "is_configured": false, 00:10:29.234 "data_offset": 0, 00:10:29.234 "data_size": 0 00:10:29.234 } 00:10:29.234 ] 00:10:29.234 }' 00:10:29.234 23:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.234 23:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.801 [2024-09-30 23:28:09.395654] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.801 [2024-09-30 23:28:09.395706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.801 [2024-09-30 23:28:09.403664] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.801 [2024-09-30 23:28:09.403711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.801 [2024-09-30 23:28:09.403720] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.801 [2024-09-30 23:28:09.403729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.801 [2024-09-30 23:28:09.403735] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.801 [2024-09-30 23:28:09.403743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.801 [2024-09-30 23:28:09.403749] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:29.801 [2024-09-30 23:28:09.403757] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.801 [2024-09-30 23:28:09.420558] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.801 BaseBdev1 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.801 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.802 [ 00:10:29.802 { 00:10:29.802 "name": "BaseBdev1", 00:10:29.802 "aliases": [ 00:10:29.802 "5f16db77-e953-4922-95c2-1ff0ed987121" 00:10:29.802 ], 00:10:29.802 "product_name": "Malloc disk", 00:10:29.802 "block_size": 512, 00:10:29.802 "num_blocks": 65536, 00:10:29.802 "uuid": "5f16db77-e953-4922-95c2-1ff0ed987121", 00:10:29.802 "assigned_rate_limits": { 00:10:29.802 "rw_ios_per_sec": 0, 00:10:29.802 "rw_mbytes_per_sec": 0, 00:10:29.802 "r_mbytes_per_sec": 0, 00:10:29.802 "w_mbytes_per_sec": 0 00:10:29.802 }, 00:10:29.802 "claimed": true, 00:10:29.802 "claim_type": "exclusive_write", 00:10:29.802 "zoned": false, 00:10:29.802 "supported_io_types": { 00:10:29.802 "read": true, 00:10:29.802 "write": true, 00:10:29.802 "unmap": true, 00:10:29.802 "flush": true, 00:10:29.802 "reset": true, 00:10:29.802 "nvme_admin": false, 00:10:29.802 "nvme_io": false, 00:10:29.802 "nvme_io_md": false, 00:10:29.802 "write_zeroes": true, 00:10:29.802 "zcopy": true, 00:10:29.802 "get_zone_info": false, 00:10:29.802 "zone_management": false, 00:10:29.802 "zone_append": false, 00:10:29.802 "compare": false, 00:10:29.802 "compare_and_write": false, 00:10:29.802 "abort": true, 00:10:29.802 "seek_hole": false, 00:10:29.802 "seek_data": false, 00:10:29.802 "copy": true, 00:10:29.802 "nvme_iov_md": false 00:10:29.802 }, 00:10:29.802 "memory_domains": [ 00:10:29.802 { 00:10:29.802 "dma_device_id": "system", 00:10:29.802 "dma_device_type": 1 00:10:29.802 }, 00:10:29.802 { 00:10:29.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.802 "dma_device_type": 2 00:10:29.802 } 00:10:29.802 ], 00:10:29.802 "driver_specific": {} 00:10:29.802 } 00:10:29.802 ] 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.802 "name": "Existed_Raid", 
00:10:29.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.802 "strip_size_kb": 0, 00:10:29.802 "state": "configuring", 00:10:29.802 "raid_level": "raid1", 00:10:29.802 "superblock": false, 00:10:29.802 "num_base_bdevs": 4, 00:10:29.802 "num_base_bdevs_discovered": 1, 00:10:29.802 "num_base_bdevs_operational": 4, 00:10:29.802 "base_bdevs_list": [ 00:10:29.802 { 00:10:29.802 "name": "BaseBdev1", 00:10:29.802 "uuid": "5f16db77-e953-4922-95c2-1ff0ed987121", 00:10:29.802 "is_configured": true, 00:10:29.802 "data_offset": 0, 00:10:29.802 "data_size": 65536 00:10:29.802 }, 00:10:29.802 { 00:10:29.802 "name": "BaseBdev2", 00:10:29.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.802 "is_configured": false, 00:10:29.802 "data_offset": 0, 00:10:29.802 "data_size": 0 00:10:29.802 }, 00:10:29.802 { 00:10:29.802 "name": "BaseBdev3", 00:10:29.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.802 "is_configured": false, 00:10:29.802 "data_offset": 0, 00:10:29.802 "data_size": 0 00:10:29.802 }, 00:10:29.802 { 00:10:29.802 "name": "BaseBdev4", 00:10:29.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.802 "is_configured": false, 00:10:29.802 "data_offset": 0, 00:10:29.802 "data_size": 0 00:10:29.802 } 00:10:29.802 ] 00:10:29.802 }' 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.802 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.369 [2024-09-30 23:28:09.923764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.369 [2024-09-30 23:28:09.923842] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.369 [2024-09-30 23:28:09.935769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.369 [2024-09-30 23:28:09.937666] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.369 [2024-09-30 23:28:09.937724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.369 [2024-09-30 23:28:09.937733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.369 [2024-09-30 23:28:09.937742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.369 [2024-09-30 23:28:09.937748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:30.369 [2024-09-30 23:28:09.937756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.369 
23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.369 "name": "Existed_Raid", 00:10:30.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.369 "strip_size_kb": 0, 00:10:30.369 "state": "configuring", 00:10:30.369 "raid_level": "raid1", 00:10:30.369 "superblock": false, 00:10:30.369 "num_base_bdevs": 4, 00:10:30.369 "num_base_bdevs_discovered": 1, 
00:10:30.369 "num_base_bdevs_operational": 4, 00:10:30.369 "base_bdevs_list": [ 00:10:30.369 { 00:10:30.369 "name": "BaseBdev1", 00:10:30.369 "uuid": "5f16db77-e953-4922-95c2-1ff0ed987121", 00:10:30.369 "is_configured": true, 00:10:30.369 "data_offset": 0, 00:10:30.369 "data_size": 65536 00:10:30.369 }, 00:10:30.369 { 00:10:30.369 "name": "BaseBdev2", 00:10:30.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.369 "is_configured": false, 00:10:30.369 "data_offset": 0, 00:10:30.369 "data_size": 0 00:10:30.369 }, 00:10:30.369 { 00:10:30.369 "name": "BaseBdev3", 00:10:30.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.369 "is_configured": false, 00:10:30.369 "data_offset": 0, 00:10:30.369 "data_size": 0 00:10:30.369 }, 00:10:30.369 { 00:10:30.369 "name": "BaseBdev4", 00:10:30.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.369 "is_configured": false, 00:10:30.369 "data_offset": 0, 00:10:30.369 "data_size": 0 00:10:30.369 } 00:10:30.369 ] 00:10:30.369 }' 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.369 23:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.629 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:30.629 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.629 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.629 [2024-09-30 23:28:10.401013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.629 BaseBdev2 00:10:30.629 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.629 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:30.629 23:28:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:30.629 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.630 [ 00:10:30.630 { 00:10:30.630 "name": "BaseBdev2", 00:10:30.630 "aliases": [ 00:10:30.630 "a66a714e-fb6a-4c6f-b260-4162a4eb7330" 00:10:30.630 ], 00:10:30.630 "product_name": "Malloc disk", 00:10:30.630 "block_size": 512, 00:10:30.630 "num_blocks": 65536, 00:10:30.630 "uuid": "a66a714e-fb6a-4c6f-b260-4162a4eb7330", 00:10:30.630 "assigned_rate_limits": { 00:10:30.630 "rw_ios_per_sec": 0, 00:10:30.630 "rw_mbytes_per_sec": 0, 00:10:30.630 "r_mbytes_per_sec": 0, 00:10:30.630 "w_mbytes_per_sec": 0 00:10:30.630 }, 00:10:30.630 "claimed": true, 00:10:30.630 "claim_type": "exclusive_write", 00:10:30.630 "zoned": false, 00:10:30.630 "supported_io_types": { 00:10:30.630 "read": true, 
00:10:30.630 "write": true, 00:10:30.630 "unmap": true, 00:10:30.630 "flush": true, 00:10:30.630 "reset": true, 00:10:30.630 "nvme_admin": false, 00:10:30.630 "nvme_io": false, 00:10:30.630 "nvme_io_md": false, 00:10:30.630 "write_zeroes": true, 00:10:30.630 "zcopy": true, 00:10:30.630 "get_zone_info": false, 00:10:30.630 "zone_management": false, 00:10:30.630 "zone_append": false, 00:10:30.630 "compare": false, 00:10:30.630 "compare_and_write": false, 00:10:30.630 "abort": true, 00:10:30.630 "seek_hole": false, 00:10:30.630 "seek_data": false, 00:10:30.630 "copy": true, 00:10:30.630 "nvme_iov_md": false 00:10:30.630 }, 00:10:30.630 "memory_domains": [ 00:10:30.630 { 00:10:30.630 "dma_device_id": "system", 00:10:30.630 "dma_device_type": 1 00:10:30.630 }, 00:10:30.630 { 00:10:30.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.630 "dma_device_type": 2 00:10:30.630 } 00:10:30.630 ], 00:10:30.630 "driver_specific": {} 00:10:30.630 } 00:10:30.630 ] 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.630 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.887 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.887 "name": "Existed_Raid", 00:10:30.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.887 "strip_size_kb": 0, 00:10:30.887 "state": "configuring", 00:10:30.887 "raid_level": "raid1", 00:10:30.887 "superblock": false, 00:10:30.887 "num_base_bdevs": 4, 00:10:30.887 "num_base_bdevs_discovered": 2, 00:10:30.887 "num_base_bdevs_operational": 4, 00:10:30.887 "base_bdevs_list": [ 00:10:30.887 { 00:10:30.887 "name": "BaseBdev1", 00:10:30.887 "uuid": "5f16db77-e953-4922-95c2-1ff0ed987121", 00:10:30.887 "is_configured": true, 00:10:30.887 "data_offset": 0, 00:10:30.887 "data_size": 65536 00:10:30.887 }, 00:10:30.887 { 00:10:30.887 "name": "BaseBdev2", 00:10:30.887 "uuid": "a66a714e-fb6a-4c6f-b260-4162a4eb7330", 00:10:30.887 "is_configured": true, 
00:10:30.887 "data_offset": 0, 00:10:30.887 "data_size": 65536 00:10:30.887 }, 00:10:30.887 { 00:10:30.887 "name": "BaseBdev3", 00:10:30.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.887 "is_configured": false, 00:10:30.888 "data_offset": 0, 00:10:30.888 "data_size": 0 00:10:30.888 }, 00:10:30.888 { 00:10:30.888 "name": "BaseBdev4", 00:10:30.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.888 "is_configured": false, 00:10:30.888 "data_offset": 0, 00:10:30.888 "data_size": 0 00:10:30.888 } 00:10:30.888 ] 00:10:30.888 }' 00:10:30.888 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.888 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.146 [2024-09-30 23:28:10.899293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.146 BaseBdev3 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.146 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.146 [ 00:10:31.146 { 00:10:31.146 "name": "BaseBdev3", 00:10:31.146 "aliases": [ 00:10:31.146 "ff471473-d09c-42cf-91a7-af2e8def327c" 00:10:31.146 ], 00:10:31.146 "product_name": "Malloc disk", 00:10:31.146 "block_size": 512, 00:10:31.146 "num_blocks": 65536, 00:10:31.146 "uuid": "ff471473-d09c-42cf-91a7-af2e8def327c", 00:10:31.146 "assigned_rate_limits": { 00:10:31.146 "rw_ios_per_sec": 0, 00:10:31.146 "rw_mbytes_per_sec": 0, 00:10:31.146 "r_mbytes_per_sec": 0, 00:10:31.146 "w_mbytes_per_sec": 0 00:10:31.146 }, 00:10:31.146 "claimed": true, 00:10:31.146 "claim_type": "exclusive_write", 00:10:31.146 "zoned": false, 00:10:31.146 "supported_io_types": { 00:10:31.146 "read": true, 00:10:31.146 "write": true, 00:10:31.146 "unmap": true, 00:10:31.146 "flush": true, 00:10:31.146 "reset": true, 00:10:31.147 "nvme_admin": false, 00:10:31.147 "nvme_io": false, 00:10:31.147 "nvme_io_md": false, 00:10:31.147 "write_zeroes": true, 00:10:31.147 "zcopy": true, 00:10:31.147 "get_zone_info": false, 00:10:31.147 "zone_management": false, 00:10:31.147 "zone_append": false, 00:10:31.147 "compare": false, 00:10:31.147 "compare_and_write": false, 
00:10:31.147 "abort": true, 00:10:31.147 "seek_hole": false, 00:10:31.147 "seek_data": false, 00:10:31.147 "copy": true, 00:10:31.147 "nvme_iov_md": false 00:10:31.147 }, 00:10:31.147 "memory_domains": [ 00:10:31.147 { 00:10:31.147 "dma_device_id": "system", 00:10:31.147 "dma_device_type": 1 00:10:31.147 }, 00:10:31.147 { 00:10:31.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.147 "dma_device_type": 2 00:10:31.147 } 00:10:31.147 ], 00:10:31.147 "driver_specific": {} 00:10:31.147 } 00:10:31.147 ] 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.147 "name": "Existed_Raid", 00:10:31.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.147 "strip_size_kb": 0, 00:10:31.147 "state": "configuring", 00:10:31.147 "raid_level": "raid1", 00:10:31.147 "superblock": false, 00:10:31.147 "num_base_bdevs": 4, 00:10:31.147 "num_base_bdevs_discovered": 3, 00:10:31.147 "num_base_bdevs_operational": 4, 00:10:31.147 "base_bdevs_list": [ 00:10:31.147 { 00:10:31.147 "name": "BaseBdev1", 00:10:31.147 "uuid": "5f16db77-e953-4922-95c2-1ff0ed987121", 00:10:31.147 "is_configured": true, 00:10:31.147 "data_offset": 0, 00:10:31.147 "data_size": 65536 00:10:31.147 }, 00:10:31.147 { 00:10:31.147 "name": "BaseBdev2", 00:10:31.147 "uuid": "a66a714e-fb6a-4c6f-b260-4162a4eb7330", 00:10:31.147 "is_configured": true, 00:10:31.147 "data_offset": 0, 00:10:31.147 "data_size": 65536 00:10:31.147 }, 00:10:31.147 { 00:10:31.147 "name": "BaseBdev3", 00:10:31.147 "uuid": "ff471473-d09c-42cf-91a7-af2e8def327c", 00:10:31.147 "is_configured": true, 00:10:31.147 "data_offset": 0, 00:10:31.147 "data_size": 65536 00:10:31.147 }, 00:10:31.147 { 00:10:31.147 "name": "BaseBdev4", 00:10:31.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.147 "is_configured": false, 
00:10:31.147 "data_offset": 0, 00:10:31.147 "data_size": 0 00:10:31.147 } 00:10:31.147 ] 00:10:31.147 }' 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.147 23:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.713 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:31.713 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.713 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.713 [2024-09-30 23:28:11.425527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:31.713 [2024-09-30 23:28:11.425587] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:31.713 [2024-09-30 23:28:11.425595] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:31.713 [2024-09-30 23:28:11.425901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:31.713 [2024-09-30 23:28:11.426052] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:31.714 [2024-09-30 23:28:11.426076] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:31.714 [2024-09-30 23:28:11.426273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.714 BaseBdev4 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.714 [ 00:10:31.714 { 00:10:31.714 "name": "BaseBdev4", 00:10:31.714 "aliases": [ 00:10:31.714 "27387f55-0a86-4f37-b681-701cf06d3d35" 00:10:31.714 ], 00:10:31.714 "product_name": "Malloc disk", 00:10:31.714 "block_size": 512, 00:10:31.714 "num_blocks": 65536, 00:10:31.714 "uuid": "27387f55-0a86-4f37-b681-701cf06d3d35", 00:10:31.714 "assigned_rate_limits": { 00:10:31.714 "rw_ios_per_sec": 0, 00:10:31.714 "rw_mbytes_per_sec": 0, 00:10:31.714 "r_mbytes_per_sec": 0, 00:10:31.714 "w_mbytes_per_sec": 0 00:10:31.714 }, 00:10:31.714 "claimed": true, 00:10:31.714 "claim_type": "exclusive_write", 00:10:31.714 "zoned": false, 00:10:31.714 "supported_io_types": { 00:10:31.714 "read": true, 00:10:31.714 "write": true, 00:10:31.714 "unmap": true, 00:10:31.714 "flush": true, 00:10:31.714 "reset": true, 00:10:31.714 
"nvme_admin": false, 00:10:31.714 "nvme_io": false, 00:10:31.714 "nvme_io_md": false, 00:10:31.714 "write_zeroes": true, 00:10:31.714 "zcopy": true, 00:10:31.714 "get_zone_info": false, 00:10:31.714 "zone_management": false, 00:10:31.714 "zone_append": false, 00:10:31.714 "compare": false, 00:10:31.714 "compare_and_write": false, 00:10:31.714 "abort": true, 00:10:31.714 "seek_hole": false, 00:10:31.714 "seek_data": false, 00:10:31.714 "copy": true, 00:10:31.714 "nvme_iov_md": false 00:10:31.714 }, 00:10:31.714 "memory_domains": [ 00:10:31.714 { 00:10:31.714 "dma_device_id": "system", 00:10:31.714 "dma_device_type": 1 00:10:31.714 }, 00:10:31.714 { 00:10:31.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.714 "dma_device_type": 2 00:10:31.714 } 00:10:31.714 ], 00:10:31.714 "driver_specific": {} 00:10:31.714 } 00:10:31.714 ] 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.714 23:28:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.714 "name": "Existed_Raid", 00:10:31.714 "uuid": "527b2cba-5374-404f-8f91-f4e5883d3c9f", 00:10:31.714 "strip_size_kb": 0, 00:10:31.714 "state": "online", 00:10:31.714 "raid_level": "raid1", 00:10:31.714 "superblock": false, 00:10:31.714 "num_base_bdevs": 4, 00:10:31.714 "num_base_bdevs_discovered": 4, 00:10:31.714 "num_base_bdevs_operational": 4, 00:10:31.714 "base_bdevs_list": [ 00:10:31.714 { 00:10:31.714 "name": "BaseBdev1", 00:10:31.714 "uuid": "5f16db77-e953-4922-95c2-1ff0ed987121", 00:10:31.714 "is_configured": true, 00:10:31.714 "data_offset": 0, 00:10:31.714 "data_size": 65536 00:10:31.714 }, 00:10:31.714 { 00:10:31.714 "name": "BaseBdev2", 00:10:31.714 "uuid": "a66a714e-fb6a-4c6f-b260-4162a4eb7330", 00:10:31.714 "is_configured": true, 00:10:31.714 "data_offset": 0, 00:10:31.714 "data_size": 65536 00:10:31.714 }, 00:10:31.714 { 00:10:31.714 "name": "BaseBdev3", 00:10:31.714 "uuid": 
"ff471473-d09c-42cf-91a7-af2e8def327c", 00:10:31.714 "is_configured": true, 00:10:31.714 "data_offset": 0, 00:10:31.714 "data_size": 65536 00:10:31.714 }, 00:10:31.714 { 00:10:31.714 "name": "BaseBdev4", 00:10:31.714 "uuid": "27387f55-0a86-4f37-b681-701cf06d3d35", 00:10:31.714 "is_configured": true, 00:10:31.714 "data_offset": 0, 00:10:31.714 "data_size": 65536 00:10:31.714 } 00:10:31.714 ] 00:10:31.714 }' 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.714 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.282 [2024-09-30 23:28:11.925024] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.282 23:28:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.282 "name": "Existed_Raid", 00:10:32.282 "aliases": [ 00:10:32.282 "527b2cba-5374-404f-8f91-f4e5883d3c9f" 00:10:32.282 ], 00:10:32.282 "product_name": "Raid Volume", 00:10:32.282 "block_size": 512, 00:10:32.282 "num_blocks": 65536, 00:10:32.282 "uuid": "527b2cba-5374-404f-8f91-f4e5883d3c9f", 00:10:32.282 "assigned_rate_limits": { 00:10:32.282 "rw_ios_per_sec": 0, 00:10:32.282 "rw_mbytes_per_sec": 0, 00:10:32.282 "r_mbytes_per_sec": 0, 00:10:32.282 "w_mbytes_per_sec": 0 00:10:32.282 }, 00:10:32.282 "claimed": false, 00:10:32.282 "zoned": false, 00:10:32.282 "supported_io_types": { 00:10:32.282 "read": true, 00:10:32.282 "write": true, 00:10:32.282 "unmap": false, 00:10:32.282 "flush": false, 00:10:32.282 "reset": true, 00:10:32.282 "nvme_admin": false, 00:10:32.282 "nvme_io": false, 00:10:32.282 "nvme_io_md": false, 00:10:32.282 "write_zeroes": true, 00:10:32.282 "zcopy": false, 00:10:32.282 "get_zone_info": false, 00:10:32.282 "zone_management": false, 00:10:32.282 "zone_append": false, 00:10:32.282 "compare": false, 00:10:32.282 "compare_and_write": false, 00:10:32.282 "abort": false, 00:10:32.282 "seek_hole": false, 00:10:32.282 "seek_data": false, 00:10:32.282 "copy": false, 00:10:32.282 "nvme_iov_md": false 00:10:32.282 }, 00:10:32.282 "memory_domains": [ 00:10:32.282 { 00:10:32.282 "dma_device_id": "system", 00:10:32.282 "dma_device_type": 1 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.282 "dma_device_type": 2 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "dma_device_id": "system", 00:10:32.282 "dma_device_type": 1 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.282 "dma_device_type": 2 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "dma_device_id": "system", 00:10:32.282 "dma_device_type": 1 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:32.282 "dma_device_type": 2 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "dma_device_id": "system", 00:10:32.282 "dma_device_type": 1 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.282 "dma_device_type": 2 00:10:32.282 } 00:10:32.282 ], 00:10:32.282 "driver_specific": { 00:10:32.282 "raid": { 00:10:32.282 "uuid": "527b2cba-5374-404f-8f91-f4e5883d3c9f", 00:10:32.282 "strip_size_kb": 0, 00:10:32.282 "state": "online", 00:10:32.282 "raid_level": "raid1", 00:10:32.282 "superblock": false, 00:10:32.282 "num_base_bdevs": 4, 00:10:32.282 "num_base_bdevs_discovered": 4, 00:10:32.282 "num_base_bdevs_operational": 4, 00:10:32.282 "base_bdevs_list": [ 00:10:32.282 { 00:10:32.282 "name": "BaseBdev1", 00:10:32.282 "uuid": "5f16db77-e953-4922-95c2-1ff0ed987121", 00:10:32.282 "is_configured": true, 00:10:32.282 "data_offset": 0, 00:10:32.282 "data_size": 65536 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "name": "BaseBdev2", 00:10:32.282 "uuid": "a66a714e-fb6a-4c6f-b260-4162a4eb7330", 00:10:32.282 "is_configured": true, 00:10:32.282 "data_offset": 0, 00:10:32.282 "data_size": 65536 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "name": "BaseBdev3", 00:10:32.282 "uuid": "ff471473-d09c-42cf-91a7-af2e8def327c", 00:10:32.282 "is_configured": true, 00:10:32.282 "data_offset": 0, 00:10:32.282 "data_size": 65536 00:10:32.282 }, 00:10:32.282 { 00:10:32.282 "name": "BaseBdev4", 00:10:32.282 "uuid": "27387f55-0a86-4f37-b681-701cf06d3d35", 00:10:32.282 "is_configured": true, 00:10:32.282 "data_offset": 0, 00:10:32.282 "data_size": 65536 00:10:32.282 } 00:10:32.282 ] 00:10:32.282 } 00:10:32.282 } 00:10:32.282 }' 00:10:32.282 23:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:32.282 BaseBdev2 00:10:32.282 BaseBdev3 
00:10:32.282 BaseBdev4' 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.282 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.542 23:28:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.542 23:28:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.542 [2024-09-30 23:28:12.248157] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.542 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.543 
23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.543 "name": "Existed_Raid", 00:10:32.543 "uuid": "527b2cba-5374-404f-8f91-f4e5883d3c9f", 00:10:32.543 "strip_size_kb": 0, 00:10:32.543 "state": "online", 00:10:32.543 "raid_level": "raid1", 00:10:32.543 "superblock": false, 00:10:32.543 "num_base_bdevs": 4, 00:10:32.543 "num_base_bdevs_discovered": 3, 00:10:32.543 "num_base_bdevs_operational": 3, 00:10:32.543 "base_bdevs_list": [ 00:10:32.543 { 00:10:32.543 "name": null, 00:10:32.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.543 "is_configured": false, 00:10:32.543 "data_offset": 0, 00:10:32.543 "data_size": 65536 00:10:32.543 }, 00:10:32.543 { 00:10:32.543 "name": "BaseBdev2", 00:10:32.543 "uuid": "a66a714e-fb6a-4c6f-b260-4162a4eb7330", 00:10:32.543 "is_configured": true, 00:10:32.543 "data_offset": 0, 00:10:32.543 "data_size": 65536 00:10:32.543 }, 00:10:32.543 { 00:10:32.543 "name": "BaseBdev3", 00:10:32.543 "uuid": "ff471473-d09c-42cf-91a7-af2e8def327c", 00:10:32.543 "is_configured": true, 00:10:32.543 "data_offset": 0, 
00:10:32.543 "data_size": 65536 00:10:32.543 }, 00:10:32.543 { 00:10:32.543 "name": "BaseBdev4", 00:10:32.543 "uuid": "27387f55-0a86-4f37-b681-701cf06d3d35", 00:10:32.543 "is_configured": true, 00:10:32.543 "data_offset": 0, 00:10:32.543 "data_size": 65536 00:10:32.543 } 00:10:32.543 ] 00:10:32.543 }' 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.543 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.111 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.112 [2024-09-30 23:28:12.798543] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.112 [2024-09-30 23:28:12.869624] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.112 [2024-09-30 23:28:12.936341] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:33.112 [2024-09-30 23:28:12.936435] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.112 [2024-09-30 23:28:12.947588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.112 [2024-09-30 23:28:12.947647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.112 [2024-09-30 23:28:12.947659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.112 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.373 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:33.373 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:33.373 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:33.373 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:33.373 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.373 23:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.373 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.373 23:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.373 BaseBdev2 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.373 [ 00:10:33.373 { 00:10:33.373 "name": "BaseBdev2", 00:10:33.373 "aliases": [ 00:10:33.373 "8ebc4e89-a525-49ee-b941-13e8f64ed317" 00:10:33.373 ], 00:10:33.373 "product_name": "Malloc disk", 00:10:33.373 "block_size": 512, 00:10:33.373 "num_blocks": 65536, 00:10:33.373 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317", 00:10:33.373 "assigned_rate_limits": { 00:10:33.373 "rw_ios_per_sec": 0, 00:10:33.373 "rw_mbytes_per_sec": 0, 00:10:33.373 "r_mbytes_per_sec": 0, 00:10:33.373 "w_mbytes_per_sec": 0 00:10:33.373 }, 00:10:33.373 "claimed": false, 00:10:33.373 "zoned": false, 00:10:33.373 "supported_io_types": { 00:10:33.373 "read": true, 00:10:33.373 "write": true, 00:10:33.373 "unmap": true, 00:10:33.373 "flush": true, 00:10:33.373 "reset": true, 00:10:33.373 "nvme_admin": false, 00:10:33.373 "nvme_io": false, 00:10:33.373 "nvme_io_md": false, 00:10:33.373 "write_zeroes": true, 00:10:33.373 "zcopy": true, 00:10:33.373 "get_zone_info": false, 00:10:33.373 "zone_management": false, 00:10:33.373 "zone_append": false, 
00:10:33.373 "compare": false, 00:10:33.373 "compare_and_write": false, 00:10:33.373 "abort": true, 00:10:33.373 "seek_hole": false, 00:10:33.373 "seek_data": false, 00:10:33.373 "copy": true, 00:10:33.373 "nvme_iov_md": false 00:10:33.373 }, 00:10:33.373 "memory_domains": [ 00:10:33.373 { 00:10:33.373 "dma_device_id": "system", 00:10:33.373 "dma_device_type": 1 00:10:33.373 }, 00:10:33.373 { 00:10:33.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.373 "dma_device_type": 2 00:10:33.373 } 00:10:33.373 ], 00:10:33.373 "driver_specific": {} 00:10:33.373 } 00:10:33.373 ] 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.373 BaseBdev3 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.373 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.373 [ 00:10:33.373 { 00:10:33.373 "name": "BaseBdev3", 00:10:33.373 "aliases": [ 00:10:33.373 "958e9355-0c73-4579-b05c-cbe0a15b1df4" 00:10:33.373 ], 00:10:33.373 "product_name": "Malloc disk", 00:10:33.373 "block_size": 512, 00:10:33.373 "num_blocks": 65536, 00:10:33.373 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4", 00:10:33.373 "assigned_rate_limits": { 00:10:33.373 "rw_ios_per_sec": 0, 00:10:33.373 "rw_mbytes_per_sec": 0, 00:10:33.373 "r_mbytes_per_sec": 0, 00:10:33.373 "w_mbytes_per_sec": 0 00:10:33.374 }, 00:10:33.374 "claimed": false, 00:10:33.374 "zoned": false, 00:10:33.374 "supported_io_types": { 00:10:33.374 "read": true, 00:10:33.374 "write": true, 00:10:33.374 "unmap": true, 00:10:33.374 "flush": true, 00:10:33.374 "reset": true, 00:10:33.374 "nvme_admin": false, 00:10:33.374 "nvme_io": false, 00:10:33.374 "nvme_io_md": false, 00:10:33.374 "write_zeroes": true, 00:10:33.374 "zcopy": true, 00:10:33.374 "get_zone_info": false, 00:10:33.374 "zone_management": false, 00:10:33.374 "zone_append": false, 
00:10:33.374 "compare": false, 00:10:33.374 "compare_and_write": false, 00:10:33.374 "abort": true, 00:10:33.374 "seek_hole": false, 00:10:33.374 "seek_data": false, 00:10:33.374 "copy": true, 00:10:33.374 "nvme_iov_md": false 00:10:33.374 }, 00:10:33.374 "memory_domains": [ 00:10:33.374 { 00:10:33.374 "dma_device_id": "system", 00:10:33.374 "dma_device_type": 1 00:10:33.374 }, 00:10:33.374 { 00:10:33.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.374 "dma_device_type": 2 00:10:33.374 } 00:10:33.374 ], 00:10:33.374 "driver_specific": {} 00:10:33.374 } 00:10:33.374 ] 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.374 BaseBdev4 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.374 [ 00:10:33.374 { 00:10:33.374 "name": "BaseBdev4", 00:10:33.374 "aliases": [ 00:10:33.374 "ca3b03ba-84b8-45b5-aea7-a22f29aa889e" 00:10:33.374 ], 00:10:33.374 "product_name": "Malloc disk", 00:10:33.374 "block_size": 512, 00:10:33.374 "num_blocks": 65536, 00:10:33.374 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e", 00:10:33.374 "assigned_rate_limits": { 00:10:33.374 "rw_ios_per_sec": 0, 00:10:33.374 "rw_mbytes_per_sec": 0, 00:10:33.374 "r_mbytes_per_sec": 0, 00:10:33.374 "w_mbytes_per_sec": 0 00:10:33.374 }, 00:10:33.374 "claimed": false, 00:10:33.374 "zoned": false, 00:10:33.374 "supported_io_types": { 00:10:33.374 "read": true, 00:10:33.374 "write": true, 00:10:33.374 "unmap": true, 00:10:33.374 "flush": true, 00:10:33.374 "reset": true, 00:10:33.374 "nvme_admin": false, 00:10:33.374 "nvme_io": false, 00:10:33.374 "nvme_io_md": false, 00:10:33.374 "write_zeroes": true, 00:10:33.374 "zcopy": true, 00:10:33.374 "get_zone_info": false, 00:10:33.374 "zone_management": false, 00:10:33.374 "zone_append": false, 
00:10:33.374 "compare": false, 00:10:33.374 "compare_and_write": false, 00:10:33.374 "abort": true, 00:10:33.374 "seek_hole": false, 00:10:33.374 "seek_data": false, 00:10:33.374 "copy": true, 00:10:33.374 "nvme_iov_md": false 00:10:33.374 }, 00:10:33.374 "memory_domains": [ 00:10:33.374 { 00:10:33.374 "dma_device_id": "system", 00:10:33.374 "dma_device_type": 1 00:10:33.374 }, 00:10:33.374 { 00:10:33.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.374 "dma_device_type": 2 00:10:33.374 } 00:10:33.374 ], 00:10:33.374 "driver_specific": {} 00:10:33.374 } 00:10:33.374 ] 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.374 [2024-09-30 23:28:13.151766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.374 [2024-09-30 23:28:13.151824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.374 [2024-09-30 23:28:13.151843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.374 [2024-09-30 23:28:13.153606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.374 [2024-09-30 23:28:13.153667] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.374 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:33.374 "name": "Existed_Raid",
00:10:33.374 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.374 "strip_size_kb": 0,
00:10:33.374 "state": "configuring",
00:10:33.374 "raid_level": "raid1",
00:10:33.374 "superblock": false,
00:10:33.374 "num_base_bdevs": 4,
00:10:33.374 "num_base_bdevs_discovered": 3,
00:10:33.374 "num_base_bdevs_operational": 4,
00:10:33.374 "base_bdevs_list": [
00:10:33.374 {
00:10:33.374 "name": "BaseBdev1",
00:10:33.374 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.374 "is_configured": false,
00:10:33.374 "data_offset": 0,
00:10:33.374 "data_size": 0
00:10:33.374 },
00:10:33.374 {
00:10:33.374 "name": "BaseBdev2",
00:10:33.374 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317",
00:10:33.374 "is_configured": true,
00:10:33.374 "data_offset": 0,
00:10:33.374 "data_size": 65536
00:10:33.375 },
00:10:33.375 {
00:10:33.375 "name": "BaseBdev3",
00:10:33.375 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4",
00:10:33.375 "is_configured": true,
00:10:33.375 "data_offset": 0,
00:10:33.375 "data_size": 65536
00:10:33.375 },
00:10:33.375 {
00:10:33.375 "name": "BaseBdev4",
00:10:33.375 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e",
00:10:33.375 "is_configured": true,
00:10:33.375 "data_offset": 0,
00:10:33.375 "data_size": 65536
00:10:33.375 }
00:10:33.375 ]
00:10:33.375 }'
00:10:33.375 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:33.375 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.944 [2024-09-30 23:28:13.571098] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:33.944 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:33.944 "name": "Existed_Raid",
00:10:33.944 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.944 "strip_size_kb": 0,
00:10:33.944 "state": "configuring",
00:10:33.944 "raid_level": "raid1",
00:10:33.944 "superblock": false,
00:10:33.944 "num_base_bdevs": 4,
00:10:33.944 "num_base_bdevs_discovered": 2,
00:10:33.944 "num_base_bdevs_operational": 4,
00:10:33.944 "base_bdevs_list": [
00:10:33.944 {
00:10:33.944 "name": "BaseBdev1",
00:10:33.944 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.944 "is_configured": false,
00:10:33.944 "data_offset": 0,
00:10:33.945 "data_size": 0
00:10:33.945 },
00:10:33.945 {
00:10:33.945 "name": null,
00:10:33.945 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317",
00:10:33.945 "is_configured": false,
00:10:33.945 "data_offset": 0,
00:10:33.945 "data_size": 65536
00:10:33.945 },
00:10:33.945 {
00:10:33.945 "name": "BaseBdev3",
00:10:33.945 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4",
00:10:33.945 "is_configured": true,
00:10:33.945 "data_offset": 0,
00:10:33.945 "data_size": 65536
00:10:33.945 },
00:10:33.945 {
00:10:33.945 "name": "BaseBdev4",
00:10:33.945 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e",
00:10:33.945 "is_configured": true,
00:10:33.945 "data_offset": 0,
00:10:33.945 "data_size": 65536
00:10:33.945 }
00:10:33.945 ]
00:10:33.945 }'
00:10:33.945 23:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:33.945 23:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.204 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.464 [2024-09-30 23:28:14.057199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:34.464 BaseBdev1
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.464 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.465 [
00:10:34.465 {
00:10:34.465 "name": "BaseBdev1",
00:10:34.465 "aliases": [
00:10:34.465 "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805"
00:10:34.465 ],
00:10:34.465 "product_name": "Malloc disk",
00:10:34.465 "block_size": 512,
00:10:34.465 "num_blocks": 65536,
00:10:34.465 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805",
00:10:34.465 "assigned_rate_limits": {
00:10:34.465 "rw_ios_per_sec": 0,
00:10:34.465 "rw_mbytes_per_sec": 0,
00:10:34.465 "r_mbytes_per_sec": 0,
00:10:34.465 "w_mbytes_per_sec": 0
00:10:34.465 },
00:10:34.465 "claimed": true,
00:10:34.465 "claim_type": "exclusive_write",
00:10:34.465 "zoned": false,
00:10:34.465 "supported_io_types": {
00:10:34.465 "read": true,
00:10:34.465 "write": true,
00:10:34.465 "unmap": true,
00:10:34.465 "flush": true,
00:10:34.465 "reset": true,
00:10:34.465 "nvme_admin": false,
00:10:34.465 "nvme_io": false,
00:10:34.465 "nvme_io_md": false,
00:10:34.465 "write_zeroes": true,
00:10:34.465 "zcopy": true,
00:10:34.465 "get_zone_info": false,
00:10:34.465 "zone_management": false,
00:10:34.465 "zone_append": false,
00:10:34.465 "compare": false,
00:10:34.465 "compare_and_write": false,
00:10:34.465 "abort": true,
00:10:34.465 "seek_hole": false,
00:10:34.465 "seek_data": false,
00:10:34.465 "copy": true,
00:10:34.465 "nvme_iov_md": false
00:10:34.465 },
00:10:34.465 "memory_domains": [
00:10:34.465 {
00:10:34.465 "dma_device_id": "system",
00:10:34.465 "dma_device_type": 1
00:10:34.465 },
00:10:34.465 {
00:10:34.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:34.465 "dma_device_type": 2
00:10:34.465 }
00:10:34.465 ],
00:10:34.465 "driver_specific": {}
00:10:34.465 }
00:10:34.465 ]
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:34.465 "name": "Existed_Raid",
00:10:34.465 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:34.465 "strip_size_kb": 0,
00:10:34.465 "state": "configuring",
00:10:34.465 "raid_level": "raid1",
00:10:34.465 "superblock": false,
00:10:34.465 "num_base_bdevs": 4,
00:10:34.465 "num_base_bdevs_discovered": 3,
00:10:34.465 "num_base_bdevs_operational": 4,
00:10:34.465 "base_bdevs_list": [
00:10:34.465 {
00:10:34.465 "name": "BaseBdev1",
00:10:34.465 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805",
00:10:34.465 "is_configured": true,
00:10:34.465 "data_offset": 0,
00:10:34.465 "data_size": 65536
00:10:34.465 },
00:10:34.465 {
00:10:34.465 "name": null,
00:10:34.465 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317",
00:10:34.465 "is_configured": false,
00:10:34.465 "data_offset": 0,
00:10:34.465 "data_size": 65536
00:10:34.465 },
00:10:34.465 {
00:10:34.465 "name": "BaseBdev3",
00:10:34.465 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4",
00:10:34.465 "is_configured": true,
00:10:34.465 "data_offset": 0,
00:10:34.465 "data_size": 65536
00:10:34.465 },
00:10:34.465 {
00:10:34.465 "name": "BaseBdev4",
00:10:34.465 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e",
00:10:34.465 "is_configured": true,
00:10:34.465 "data_offset": 0,
00:10:34.465 "data_size": 65536
00:10:34.465 }
00:10:34.465 ]
00:10:34.465 }'
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:34.465 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.725 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:34.725 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.725 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:34.725 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.725 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.985 [2024-09-30 23:28:14.588352] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:34.985 "name": "Existed_Raid",
00:10:34.985 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:34.985 "strip_size_kb": 0,
00:10:34.985 "state": "configuring",
00:10:34.985 "raid_level": "raid1",
00:10:34.985 "superblock": false,
00:10:34.985 "num_base_bdevs": 4,
00:10:34.985 "num_base_bdevs_discovered": 2,
00:10:34.985 "num_base_bdevs_operational": 4,
00:10:34.985 "base_bdevs_list": [
00:10:34.985 {
00:10:34.985 "name": "BaseBdev1",
00:10:34.985 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805",
00:10:34.985 "is_configured": true,
00:10:34.985 "data_offset": 0,
00:10:34.985 "data_size": 65536
00:10:34.985 },
00:10:34.985 {
00:10:34.985 "name": null,
00:10:34.985 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317",
00:10:34.985 "is_configured": false,
00:10:34.985 "data_offset": 0,
00:10:34.985 "data_size": 65536
00:10:34.985 },
00:10:34.985 {
00:10:34.985 "name": null,
00:10:34.985 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4",
00:10:34.985 "is_configured": false,
00:10:34.985 "data_offset": 0,
00:10:34.985 "data_size": 65536
00:10:34.985 },
00:10:34.985 {
00:10:34.985 "name": "BaseBdev4",
00:10:34.985 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e",
00:10:34.985 "is_configured": true,
00:10:34.985 "data_offset": 0,
00:10:34.985 "data_size": 65536
00:10:34.985 }
00:10:34.985 ]
00:10:34.985 }'
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:34.985 23:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.244 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:35.244 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.244 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.244 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:35.244 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.503 [2024-09-30 23:28:15.107496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:35.503 "name": "Existed_Raid",
00:10:35.503 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:35.503 "strip_size_kb": 0,
00:10:35.503 "state": "configuring",
00:10:35.503 "raid_level": "raid1",
00:10:35.503 "superblock": false,
00:10:35.503 "num_base_bdevs": 4,
00:10:35.503 "num_base_bdevs_discovered": 3,
00:10:35.503 "num_base_bdevs_operational": 4,
00:10:35.503 "base_bdevs_list": [
00:10:35.503 {
00:10:35.503 "name": "BaseBdev1",
00:10:35.503 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805",
00:10:35.503 "is_configured": true,
00:10:35.503 "data_offset": 0,
00:10:35.503 "data_size": 65536
00:10:35.503 },
00:10:35.503 {
00:10:35.503 "name": null,
00:10:35.503 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317",
00:10:35.503 "is_configured": false,
00:10:35.503 "data_offset": 0,
00:10:35.503 "data_size": 65536
00:10:35.503 },
00:10:35.503 {
00:10:35.503 "name": "BaseBdev3",
00:10:35.503 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4",
00:10:35.503 "is_configured": true,
00:10:35.503 "data_offset": 0,
00:10:35.503 "data_size": 65536
00:10:35.503 },
00:10:35.503 {
00:10:35.503 "name": "BaseBdev4",
00:10:35.503 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e",
00:10:35.503 "is_configured": true,
00:10:35.503 "data_offset": 0,
00:10:35.503 "data_size": 65536
00:10:35.503 }
00:10:35.503 ]
00:10:35.503 }'
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:35.503 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:35.762 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:35.762 [2024-09-30 23:28:15.606703] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:36.022 "name": "Existed_Raid",
00:10:36.022 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:36.022 "strip_size_kb": 0,
00:10:36.022 "state": "configuring",
00:10:36.022 "raid_level": "raid1",
00:10:36.022 "superblock": false,
00:10:36.022 "num_base_bdevs": 4,
00:10:36.022 "num_base_bdevs_discovered": 2,
00:10:36.022 "num_base_bdevs_operational": 4,
00:10:36.022 "base_bdevs_list": [
00:10:36.022 {
00:10:36.022 "name": null,
00:10:36.022 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805",
00:10:36.022 "is_configured": false,
00:10:36.022 "data_offset": 0,
00:10:36.022 "data_size": 65536
00:10:36.022 },
00:10:36.022 {
00:10:36.022 "name": null,
00:10:36.022 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317",
00:10:36.022 "is_configured": false,
00:10:36.022 "data_offset": 0,
00:10:36.022 "data_size": 65536
00:10:36.022 },
00:10:36.022 {
00:10:36.022 "name": "BaseBdev3",
00:10:36.022 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4",
00:10:36.022 "is_configured": true,
00:10:36.022 "data_offset": 0,
00:10:36.022 "data_size": 65536
00:10:36.022 },
00:10:36.022 {
00:10:36.022 "name": "BaseBdev4",
00:10:36.022 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e",
00:10:36.022 "is_configured": true,
00:10:36.022 "data_offset": 0,
00:10:36.022 "data_size": 65536
00:10:36.022 }
00:10:36.022 ]
00:10:36.022 }'
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:36.022 23:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.282 [2024-09-30 23:28:16.120388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.282 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:36.541 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.541 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:36.541 "name": "Existed_Raid",
00:10:36.541 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:36.541 "strip_size_kb": 0,
00:10:36.541 "state": "configuring",
00:10:36.541 "raid_level": "raid1",
00:10:36.541 "superblock": false,
00:10:36.541 "num_base_bdevs": 4,
00:10:36.541 "num_base_bdevs_discovered": 3,
00:10:36.541 "num_base_bdevs_operational": 4,
00:10:36.541 "base_bdevs_list": [
00:10:36.541 {
00:10:36.541 "name": null,
00:10:36.541 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805",
00:10:36.541 "is_configured": false,
00:10:36.541 "data_offset": 0,
00:10:36.541 "data_size": 65536
00:10:36.541 },
00:10:36.541 {
00:10:36.541 "name": "BaseBdev2",
00:10:36.541 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317",
00:10:36.541 "is_configured": true,
00:10:36.541 "data_offset": 0,
00:10:36.541 "data_size": 65536
00:10:36.541 },
00:10:36.541 {
00:10:36.541 "name": "BaseBdev3",
00:10:36.541 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4",
00:10:36.541 "is_configured": true,
00:10:36.541 "data_offset": 0,
00:10:36.541 "data_size": 65536
00:10:36.541 },
00:10:36.541 {
00:10:36.541 "name": "BaseBdev4",
00:10:36.541 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e",
00:10:36.541 "is_configured": true,
00:10:36.541 "data_offset": 0,
00:10:36.541 "data_size": 65536
00:10:36.541 }
00:10:36.541 ]
00:10:36.541 }'
00:10:36.541 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:36.541 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 99b51bbc-a1e8-4fe5-bb54-a4d5f9945805
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.801 [2024-09-30 23:28:16.582615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:36.801 [2024-09-30 23:28:16.582685] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:10:36.801 [2024-09-30 23:28:16.582696] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:10:36.801 [2024-09-30 23:28:16.582950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:10:36.801 [2024-09-30 23:28:16.583092] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:10:36.801 [2024-09-30 23:28:16.583110] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:10:36.801 [2024-09-30 23:28:16.583287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:36.801 NewBaseBdev
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- #
xtrace_disable 00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.801 [ 00:10:36.801 { 00:10:36.801 "name": "NewBaseBdev", 00:10:36.801 "aliases": [ 00:10:36.801 "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805" 00:10:36.801 ], 00:10:36.801 "product_name": "Malloc disk", 00:10:36.801 "block_size": 512, 00:10:36.801 "num_blocks": 65536, 00:10:36.801 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805", 00:10:36.801 "assigned_rate_limits": { 00:10:36.801 "rw_ios_per_sec": 0, 00:10:36.801 "rw_mbytes_per_sec": 0, 00:10:36.801 "r_mbytes_per_sec": 0, 00:10:36.801 "w_mbytes_per_sec": 0 00:10:36.801 }, 00:10:36.801 "claimed": true, 00:10:36.801 "claim_type": "exclusive_write", 00:10:36.801 "zoned": false, 00:10:36.801 "supported_io_types": { 00:10:36.801 "read": true, 00:10:36.801 "write": true, 00:10:36.801 "unmap": true, 00:10:36.801 "flush": true, 00:10:36.801 "reset": true, 00:10:36.801 "nvme_admin": false, 00:10:36.801 "nvme_io": false, 00:10:36.801 "nvme_io_md": false, 00:10:36.801 "write_zeroes": true, 00:10:36.801 "zcopy": true, 00:10:36.801 "get_zone_info": false, 00:10:36.801 "zone_management": false, 00:10:36.801 "zone_append": false, 00:10:36.801 "compare": false, 00:10:36.801 "compare_and_write": false, 00:10:36.801 "abort": true, 00:10:36.801 "seek_hole": false, 00:10:36.801 "seek_data": false, 00:10:36.801 "copy": true, 00:10:36.801 "nvme_iov_md": false 00:10:36.801 }, 00:10:36.801 "memory_domains": [ 00:10:36.801 { 00:10:36.801 "dma_device_id": "system", 00:10:36.801 "dma_device_type": 1 00:10:36.801 }, 00:10:36.801 { 00:10:36.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.801 "dma_device_type": 2 00:10:36.801 } 00:10:36.801 ], 00:10:36.801 "driver_specific": {} 00:10:36.801 } 00:10:36.801 ] 00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.801 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
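The `verify_raid_bdev_state` checks that follow pull the full `rpc_cmd bdev_raid_get_bdevs all` array and filter it with `jq -r '.[] | select(.name == "Existed_Raid")'` before asserting on the state fields. The same selection can be sketched in Python — the `find_raid_bdev` helper name and the trimmed sample payload below are illustrative only, shaped after the JSON dumped in this log, not part of the test suite:

```python
import json

def find_raid_bdev(get_bdevs_output, name):
    """Mimic jq's '.[] | select(.name == NAME)' over bdev_raid_get_bdevs output."""
    for bdev in json.loads(get_bdevs_output):
        if bdev.get("name") == name:
            return bdev
    return None

# Trimmed sample shaped like the raid_bdev_info JSON dumped in the log above.
sample = json.dumps([{
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": True},
        {"name": "BaseBdev2", "is_configured": True},
    ],
}])

raid = find_raid_bdev(sample, "Existed_Raid")
```

The shell helper then reads `state`, `raid_level`, and the discovered/operational counts out of the selected object with further jq expressions.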
00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.802 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.061 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.061 "name": "Existed_Raid", 00:10:37.061 "uuid": "d79ccf37-61b0-42b5-9f44-e298ef688138", 00:10:37.061 "strip_size_kb": 0, 00:10:37.061 "state": "online", 00:10:37.061 
"raid_level": "raid1", 00:10:37.061 "superblock": false, 00:10:37.061 "num_base_bdevs": 4, 00:10:37.061 "num_base_bdevs_discovered": 4, 00:10:37.061 "num_base_bdevs_operational": 4, 00:10:37.061 "base_bdevs_list": [ 00:10:37.061 { 00:10:37.061 "name": "NewBaseBdev", 00:10:37.061 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805", 00:10:37.061 "is_configured": true, 00:10:37.061 "data_offset": 0, 00:10:37.061 "data_size": 65536 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "name": "BaseBdev2", 00:10:37.061 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317", 00:10:37.061 "is_configured": true, 00:10:37.061 "data_offset": 0, 00:10:37.061 "data_size": 65536 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "name": "BaseBdev3", 00:10:37.061 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4", 00:10:37.061 "is_configured": true, 00:10:37.061 "data_offset": 0, 00:10:37.061 "data_size": 65536 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "name": "BaseBdev4", 00:10:37.061 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e", 00:10:37.061 "is_configured": true, 00:10:37.061 "data_offset": 0, 00:10:37.061 "data_size": 65536 00:10:37.061 } 00:10:37.061 ] 00:10:37.061 }' 00:10:37.061 23:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.061 23:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.320 [2024-09-30 23:28:17.054127] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.320 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.320 "name": "Existed_Raid", 00:10:37.320 "aliases": [ 00:10:37.320 "d79ccf37-61b0-42b5-9f44-e298ef688138" 00:10:37.320 ], 00:10:37.320 "product_name": "Raid Volume", 00:10:37.321 "block_size": 512, 00:10:37.321 "num_blocks": 65536, 00:10:37.321 "uuid": "d79ccf37-61b0-42b5-9f44-e298ef688138", 00:10:37.321 "assigned_rate_limits": { 00:10:37.321 "rw_ios_per_sec": 0, 00:10:37.321 "rw_mbytes_per_sec": 0, 00:10:37.321 "r_mbytes_per_sec": 0, 00:10:37.321 "w_mbytes_per_sec": 0 00:10:37.321 }, 00:10:37.321 "claimed": false, 00:10:37.321 "zoned": false, 00:10:37.321 "supported_io_types": { 00:10:37.321 "read": true, 00:10:37.321 "write": true, 00:10:37.321 "unmap": false, 00:10:37.321 "flush": false, 00:10:37.321 "reset": true, 00:10:37.321 "nvme_admin": false, 00:10:37.321 "nvme_io": false, 00:10:37.321 "nvme_io_md": false, 00:10:37.321 "write_zeroes": true, 00:10:37.321 "zcopy": false, 00:10:37.321 "get_zone_info": false, 00:10:37.321 "zone_management": false, 00:10:37.321 "zone_append": false, 00:10:37.321 "compare": false, 00:10:37.321 "compare_and_write": false, 00:10:37.321 "abort": false, 00:10:37.321 "seek_hole": false, 00:10:37.321 "seek_data": false, 00:10:37.321 
"copy": false, 00:10:37.321 "nvme_iov_md": false 00:10:37.321 }, 00:10:37.321 "memory_domains": [ 00:10:37.321 { 00:10:37.321 "dma_device_id": "system", 00:10:37.321 "dma_device_type": 1 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.321 "dma_device_type": 2 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "dma_device_id": "system", 00:10:37.321 "dma_device_type": 1 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.321 "dma_device_type": 2 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "dma_device_id": "system", 00:10:37.321 "dma_device_type": 1 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.321 "dma_device_type": 2 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "dma_device_id": "system", 00:10:37.321 "dma_device_type": 1 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.321 "dma_device_type": 2 00:10:37.321 } 00:10:37.321 ], 00:10:37.321 "driver_specific": { 00:10:37.321 "raid": { 00:10:37.321 "uuid": "d79ccf37-61b0-42b5-9f44-e298ef688138", 00:10:37.321 "strip_size_kb": 0, 00:10:37.321 "state": "online", 00:10:37.321 "raid_level": "raid1", 00:10:37.321 "superblock": false, 00:10:37.321 "num_base_bdevs": 4, 00:10:37.321 "num_base_bdevs_discovered": 4, 00:10:37.321 "num_base_bdevs_operational": 4, 00:10:37.321 "base_bdevs_list": [ 00:10:37.321 { 00:10:37.321 "name": "NewBaseBdev", 00:10:37.321 "uuid": "99b51bbc-a1e8-4fe5-bb54-a4d5f9945805", 00:10:37.321 "is_configured": true, 00:10:37.321 "data_offset": 0, 00:10:37.321 "data_size": 65536 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "name": "BaseBdev2", 00:10:37.321 "uuid": "8ebc4e89-a525-49ee-b941-13e8f64ed317", 00:10:37.321 "is_configured": true, 00:10:37.321 "data_offset": 0, 00:10:37.321 "data_size": 65536 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "name": "BaseBdev3", 00:10:37.321 "uuid": "958e9355-0c73-4579-b05c-cbe0a15b1df4", 00:10:37.321 
"is_configured": true, 00:10:37.321 "data_offset": 0, 00:10:37.321 "data_size": 65536 00:10:37.321 }, 00:10:37.321 { 00:10:37.321 "name": "BaseBdev4", 00:10:37.321 "uuid": "ca3b03ba-84b8-45b5-aea7-a22f29aa889e", 00:10:37.321 "is_configured": true, 00:10:37.321 "data_offset": 0, 00:10:37.321 "data_size": 65536 00:10:37.321 } 00:10:37.321 ] 00:10:37.321 } 00:10:37.321 } 00:10:37.321 }' 00:10:37.321 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.321 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:37.321 BaseBdev2 00:10:37.321 BaseBdev3 00:10:37.321 BaseBdev4' 00:10:37.321 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.580 23:28:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.580 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.581 23:28:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.581 [2024-09-30 23:28:17.377264] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.581 [2024-09-30 23:28:17.377299] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.581 [2024-09-30 23:28:17.377379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.581 [2024-09-30 23:28:17.377648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.581 [2024-09-30 23:28:17.377677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 84020 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84020 ']' 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84020 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84020 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.581 killing process with pid 84020 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84020' 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84020 00:10:37.581 [2024-09-30 23:28:17.422248] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.581 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84020 00:10:37.840 [2024-09-30 23:28:17.461873] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.099 23:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:38.099 00:10:38.099 real 0m9.718s 00:10:38.099 user 0m16.605s 00:10:38.099 sys 0m2.090s 00:10:38.099 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.099 23:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.099 ************************************ 00:10:38.099 END TEST raid_state_function_test 00:10:38.099 ************************************ 
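The `verify_raid_bdev_properties` pass exercised above reduces each bdev to the string `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` and compares the raid volume's string against every configured base bdev's — which is why the log shows `cmp_raid_bdev='512 '` and `[[ 512 == ... ]]` checks with trailing spaces for the null fields. A Python sketch of that jq `join` semantics (the `props_key` helper is a hypothetical name; jq renders null as an empty string and booleans as `true`/`false` literals):

```python
def props_key(bdev):
    """Replicate jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")':
    null renders as "", booleans as true/false, numbers are stringified."""
    def render(v):
        if v is None:
            return ""
        if isinstance(v, bool):
            return "true" if v else "false"
        return str(v)
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join(render(bdev.get(k)) for k in keys)

# A malloc base bdev with only block_size set yields "512   "
# (three trailing spaces for the three null fields), matching the log.
raid_key = props_key({"block_size": 512})
base_key = props_key({"block_size": 512})
properties_match = raid_key == base_key
```

Comparing one joined string per bdev keeps the shell-side check to a single `[[ ... == ... ]]` instead of four separate field comparisons.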
00:10:38.099 23:28:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:38.100 23:28:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:38.100 23:28:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.100 23:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.100 ************************************ 00:10:38.100 START TEST raid_state_function_test_sb 00:10:38.100 ************************************ 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.100 
23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84669 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84669' 00:10:38.100 Process raid pid: 84669 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84669 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84669 ']' 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.100 23:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.100 [2024-09-30 23:28:17.876163] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
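The `waitforlisten 84669` step above blocks until the freshly launched `bdev_svc` app is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that kind of wait, assuming a UNIX-domain RPC socket — the `wait_for_rpc_socket` name, parameters, and retry policy here are illustrative, not SPDK's actual implementation:

```python
import os
import socket
import time

def wait_for_rpc_socket(path, timeout=5.0, interval=0.1):
    """Poll until a UNIX domain socket exists at `path` and accepts a
    connection, then return True; return False if `timeout` elapses first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True          # daemon is up and listening
            except OSError:
                pass                 # socket exists but not ready yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```

Polling for a successful `connect()` rather than mere file existence matters: the socket file can appear before the daemon calls `listen()`, and a premature RPC would fail.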
00:10:38.100 [2024-09-30 23:28:17.876754] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.360 [2024-09-30 23:28:18.038158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.360 [2024-09-30 23:28:18.082644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.360 [2024-09-30 23:28:18.125186] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.360 [2024-09-30 23:28:18.125244] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.929 [2024-09-30 23:28:18.714669] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.929 [2024-09-30 23:28:18.714722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.929 [2024-09-30 23:28:18.714742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.929 [2024-09-30 23:28:18.714753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.929 [2024-09-30 23:28:18.714762] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:38.929 [2024-09-30 23:28:18.714775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.929 [2024-09-30 23:28:18.714781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.929 [2024-09-30 23:28:18.714790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.929 23:28:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.929 "name": "Existed_Raid", 00:10:38.929 "uuid": "06c3bcd9-cda4-400c-a1b4-374f610e6a64", 00:10:38.929 "strip_size_kb": 0, 00:10:38.929 "state": "configuring", 00:10:38.929 "raid_level": "raid1", 00:10:38.929 "superblock": true, 00:10:38.929 "num_base_bdevs": 4, 00:10:38.929 "num_base_bdevs_discovered": 0, 00:10:38.929 "num_base_bdevs_operational": 4, 00:10:38.929 "base_bdevs_list": [ 00:10:38.929 { 00:10:38.929 "name": "BaseBdev1", 00:10:38.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.929 "is_configured": false, 00:10:38.929 "data_offset": 0, 00:10:38.929 "data_size": 0 00:10:38.929 }, 00:10:38.929 { 00:10:38.929 "name": "BaseBdev2", 00:10:38.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.929 "is_configured": false, 00:10:38.929 "data_offset": 0, 00:10:38.929 "data_size": 0 00:10:38.929 }, 00:10:38.929 { 00:10:38.929 "name": "BaseBdev3", 00:10:38.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.929 "is_configured": false, 00:10:38.929 "data_offset": 0, 00:10:38.929 "data_size": 0 00:10:38.929 }, 00:10:38.929 { 00:10:38.929 "name": "BaseBdev4", 00:10:38.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.929 "is_configured": false, 00:10:38.929 "data_offset": 0, 00:10:38.929 "data_size": 0 00:10:38.929 } 00:10:38.929 ] 00:10:38.929 }' 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.929 23:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.499 23:28:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.499 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.499 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.499 [2024-09-30 23:28:19.145864] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.499 [2024-09-30 23:28:19.146012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:39.499 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.499 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.499 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.499 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.499 [2024-09-30 23:28:19.153894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.499 [2024-09-30 23:28:19.153935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.499 [2024-09-30 23:28:19.153944] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.499 [2024-09-30 23:28:19.153952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.499 [2024-09-30 23:28:19.153958] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.499 [2024-09-30 23:28:19.153967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.499 [2024-09-30 23:28:19.153972] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:10:39.499 [2024-09-30 23:28:19.153980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.499 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.500 [2024-09-30 23:28:19.170878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.500 BaseBdev1 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.500 [ 00:10:39.500 { 00:10:39.500 "name": "BaseBdev1", 00:10:39.500 "aliases": [ 00:10:39.500 "018899fc-750f-4a37-926d-2a71b184f780" 00:10:39.500 ], 00:10:39.500 "product_name": "Malloc disk", 00:10:39.500 "block_size": 512, 00:10:39.500 "num_blocks": 65536, 00:10:39.500 "uuid": "018899fc-750f-4a37-926d-2a71b184f780", 00:10:39.500 "assigned_rate_limits": { 00:10:39.500 "rw_ios_per_sec": 0, 00:10:39.500 "rw_mbytes_per_sec": 0, 00:10:39.500 "r_mbytes_per_sec": 0, 00:10:39.500 "w_mbytes_per_sec": 0 00:10:39.500 }, 00:10:39.500 "claimed": true, 00:10:39.500 "claim_type": "exclusive_write", 00:10:39.500 "zoned": false, 00:10:39.500 "supported_io_types": { 00:10:39.500 "read": true, 00:10:39.500 "write": true, 00:10:39.500 "unmap": true, 00:10:39.500 "flush": true, 00:10:39.500 "reset": true, 00:10:39.500 "nvme_admin": false, 00:10:39.500 "nvme_io": false, 00:10:39.500 "nvme_io_md": false, 00:10:39.500 "write_zeroes": true, 00:10:39.500 "zcopy": true, 00:10:39.500 "get_zone_info": false, 00:10:39.500 "zone_management": false, 00:10:39.500 "zone_append": false, 00:10:39.500 "compare": false, 00:10:39.500 "compare_and_write": false, 00:10:39.500 "abort": true, 00:10:39.500 "seek_hole": false, 00:10:39.500 "seek_data": false, 00:10:39.500 "copy": true, 00:10:39.500 "nvme_iov_md": false 00:10:39.500 }, 00:10:39.500 "memory_domains": [ 00:10:39.500 { 00:10:39.500 "dma_device_id": "system", 00:10:39.500 "dma_device_type": 1 00:10:39.500 }, 00:10:39.500 { 00:10:39.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.500 "dma_device_type": 2 00:10:39.500 } 00:10:39.500 
], 00:10:39.500 "driver_specific": {} 00:10:39.500 } 00:10:39.500 ] 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.500 23:28:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.500 "name": "Existed_Raid", 00:10:39.500 "uuid": "61c10265-ba6b-4f59-b688-838ed6dd7f1a", 00:10:39.500 "strip_size_kb": 0, 00:10:39.500 "state": "configuring", 00:10:39.500 "raid_level": "raid1", 00:10:39.500 "superblock": true, 00:10:39.500 "num_base_bdevs": 4, 00:10:39.500 "num_base_bdevs_discovered": 1, 00:10:39.500 "num_base_bdevs_operational": 4, 00:10:39.500 "base_bdevs_list": [ 00:10:39.500 { 00:10:39.500 "name": "BaseBdev1", 00:10:39.500 "uuid": "018899fc-750f-4a37-926d-2a71b184f780", 00:10:39.500 "is_configured": true, 00:10:39.500 "data_offset": 2048, 00:10:39.500 "data_size": 63488 00:10:39.500 }, 00:10:39.500 { 00:10:39.500 "name": "BaseBdev2", 00:10:39.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.500 "is_configured": false, 00:10:39.500 "data_offset": 0, 00:10:39.500 "data_size": 0 00:10:39.500 }, 00:10:39.500 { 00:10:39.500 "name": "BaseBdev3", 00:10:39.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.500 "is_configured": false, 00:10:39.500 "data_offset": 0, 00:10:39.500 "data_size": 0 00:10:39.500 }, 00:10:39.500 { 00:10:39.500 "name": "BaseBdev4", 00:10:39.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.500 "is_configured": false, 00:10:39.500 "data_offset": 0, 00:10:39.500 "data_size": 0 00:10:39.500 } 00:10:39.500 ] 00:10:39.500 }' 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.500 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.070 23:28:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.070 [2024-09-30 23:28:19.674086] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.070 [2024-09-30 23:28:19.674242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.070 [2024-09-30 23:28:19.682082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.070 [2024-09-30 23:28:19.684023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.070 [2024-09-30 23:28:19.684103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.070 [2024-09-30 23:28:19.684131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.070 [2024-09-30 23:28:19.684153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.070 [2024-09-30 23:28:19.684172] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.070 [2024-09-30 23:28:19.684192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:10:40.070 "name": "Existed_Raid", 00:10:40.070 "uuid": "a5599104-6c78-48e9-945d-4debcce6ac03", 00:10:40.070 "strip_size_kb": 0, 00:10:40.070 "state": "configuring", 00:10:40.070 "raid_level": "raid1", 00:10:40.070 "superblock": true, 00:10:40.070 "num_base_bdevs": 4, 00:10:40.070 "num_base_bdevs_discovered": 1, 00:10:40.070 "num_base_bdevs_operational": 4, 00:10:40.070 "base_bdevs_list": [ 00:10:40.070 { 00:10:40.070 "name": "BaseBdev1", 00:10:40.070 "uuid": "018899fc-750f-4a37-926d-2a71b184f780", 00:10:40.070 "is_configured": true, 00:10:40.070 "data_offset": 2048, 00:10:40.070 "data_size": 63488 00:10:40.070 }, 00:10:40.070 { 00:10:40.070 "name": "BaseBdev2", 00:10:40.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.070 "is_configured": false, 00:10:40.070 "data_offset": 0, 00:10:40.070 "data_size": 0 00:10:40.070 }, 00:10:40.070 { 00:10:40.070 "name": "BaseBdev3", 00:10:40.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.070 "is_configured": false, 00:10:40.070 "data_offset": 0, 00:10:40.070 "data_size": 0 00:10:40.070 }, 00:10:40.070 { 00:10:40.070 "name": "BaseBdev4", 00:10:40.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.070 "is_configured": false, 00:10:40.070 "data_offset": 0, 00:10:40.070 "data_size": 0 00:10:40.070 } 00:10:40.070 ] 00:10:40.070 }' 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.070 23:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.330 [2024-09-30 23:28:20.161544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:10:40.330 BaseBdev2 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.330 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.590 [ 00:10:40.590 { 00:10:40.590 "name": "BaseBdev2", 00:10:40.590 "aliases": [ 00:10:40.590 "4eb610a6-ecfc-4ca9-8a84-fd631c75a988" 00:10:40.590 ], 00:10:40.590 "product_name": "Malloc disk", 00:10:40.590 "block_size": 512, 00:10:40.590 "num_blocks": 65536, 00:10:40.590 "uuid": "4eb610a6-ecfc-4ca9-8a84-fd631c75a988", 00:10:40.590 
"assigned_rate_limits": { 00:10:40.590 "rw_ios_per_sec": 0, 00:10:40.590 "rw_mbytes_per_sec": 0, 00:10:40.590 "r_mbytes_per_sec": 0, 00:10:40.590 "w_mbytes_per_sec": 0 00:10:40.590 }, 00:10:40.590 "claimed": true, 00:10:40.590 "claim_type": "exclusive_write", 00:10:40.590 "zoned": false, 00:10:40.590 "supported_io_types": { 00:10:40.590 "read": true, 00:10:40.590 "write": true, 00:10:40.590 "unmap": true, 00:10:40.590 "flush": true, 00:10:40.590 "reset": true, 00:10:40.590 "nvme_admin": false, 00:10:40.590 "nvme_io": false, 00:10:40.590 "nvme_io_md": false, 00:10:40.590 "write_zeroes": true, 00:10:40.590 "zcopy": true, 00:10:40.590 "get_zone_info": false, 00:10:40.590 "zone_management": false, 00:10:40.590 "zone_append": false, 00:10:40.590 "compare": false, 00:10:40.590 "compare_and_write": false, 00:10:40.590 "abort": true, 00:10:40.590 "seek_hole": false, 00:10:40.590 "seek_data": false, 00:10:40.590 "copy": true, 00:10:40.590 "nvme_iov_md": false 00:10:40.590 }, 00:10:40.590 "memory_domains": [ 00:10:40.590 { 00:10:40.590 "dma_device_id": "system", 00:10:40.590 "dma_device_type": 1 00:10:40.590 }, 00:10:40.590 { 00:10:40.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.590 "dma_device_type": 2 00:10:40.590 } 00:10:40.590 ], 00:10:40.590 "driver_specific": {} 00:10:40.590 } 00:10:40.590 ] 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.590 "name": "Existed_Raid", 00:10:40.590 "uuid": "a5599104-6c78-48e9-945d-4debcce6ac03", 00:10:40.590 "strip_size_kb": 0, 00:10:40.590 "state": "configuring", 00:10:40.590 "raid_level": "raid1", 00:10:40.590 "superblock": true, 00:10:40.590 "num_base_bdevs": 4, 00:10:40.590 "num_base_bdevs_discovered": 2, 00:10:40.590 "num_base_bdevs_operational": 4, 
00:10:40.590 "base_bdevs_list": [ 00:10:40.590 { 00:10:40.590 "name": "BaseBdev1", 00:10:40.590 "uuid": "018899fc-750f-4a37-926d-2a71b184f780", 00:10:40.590 "is_configured": true, 00:10:40.590 "data_offset": 2048, 00:10:40.590 "data_size": 63488 00:10:40.590 }, 00:10:40.590 { 00:10:40.590 "name": "BaseBdev2", 00:10:40.590 "uuid": "4eb610a6-ecfc-4ca9-8a84-fd631c75a988", 00:10:40.590 "is_configured": true, 00:10:40.590 "data_offset": 2048, 00:10:40.590 "data_size": 63488 00:10:40.590 }, 00:10:40.590 { 00:10:40.590 "name": "BaseBdev3", 00:10:40.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.590 "is_configured": false, 00:10:40.590 "data_offset": 0, 00:10:40.590 "data_size": 0 00:10:40.590 }, 00:10:40.590 { 00:10:40.590 "name": "BaseBdev4", 00:10:40.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.590 "is_configured": false, 00:10:40.590 "data_offset": 0, 00:10:40.590 "data_size": 0 00:10:40.590 } 00:10:40.590 ] 00:10:40.590 }' 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.590 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.854 [2024-09-30 23:28:20.587953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.854 BaseBdev3 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.854 [ 00:10:40.854 { 00:10:40.854 "name": "BaseBdev3", 00:10:40.854 "aliases": [ 00:10:40.854 "bfcaf774-600d-4e26-baa7-610a0e6f7d70" 00:10:40.854 ], 00:10:40.854 "product_name": "Malloc disk", 00:10:40.854 "block_size": 512, 00:10:40.854 "num_blocks": 65536, 00:10:40.854 "uuid": "bfcaf774-600d-4e26-baa7-610a0e6f7d70", 00:10:40.854 "assigned_rate_limits": { 00:10:40.854 "rw_ios_per_sec": 0, 00:10:40.854 "rw_mbytes_per_sec": 0, 00:10:40.854 "r_mbytes_per_sec": 0, 00:10:40.854 "w_mbytes_per_sec": 0 00:10:40.854 }, 00:10:40.854 "claimed": true, 00:10:40.854 "claim_type": "exclusive_write", 00:10:40.854 "zoned": false, 00:10:40.854 "supported_io_types": { 00:10:40.854 "read": true, 00:10:40.854 
"write": true, 00:10:40.854 "unmap": true, 00:10:40.854 "flush": true, 00:10:40.854 "reset": true, 00:10:40.854 "nvme_admin": false, 00:10:40.854 "nvme_io": false, 00:10:40.854 "nvme_io_md": false, 00:10:40.854 "write_zeroes": true, 00:10:40.854 "zcopy": true, 00:10:40.854 "get_zone_info": false, 00:10:40.854 "zone_management": false, 00:10:40.854 "zone_append": false, 00:10:40.854 "compare": false, 00:10:40.854 "compare_and_write": false, 00:10:40.854 "abort": true, 00:10:40.854 "seek_hole": false, 00:10:40.854 "seek_data": false, 00:10:40.854 "copy": true, 00:10:40.854 "nvme_iov_md": false 00:10:40.854 }, 00:10:40.854 "memory_domains": [ 00:10:40.854 { 00:10:40.854 "dma_device_id": "system", 00:10:40.854 "dma_device_type": 1 00:10:40.854 }, 00:10:40.854 { 00:10:40.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.854 "dma_device_type": 2 00:10:40.854 } 00:10:40.854 ], 00:10:40.854 "driver_specific": {} 00:10:40.854 } 00:10:40.854 ] 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.854 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.855 "name": "Existed_Raid", 00:10:40.855 "uuid": "a5599104-6c78-48e9-945d-4debcce6ac03", 00:10:40.855 "strip_size_kb": 0, 00:10:40.855 "state": "configuring", 00:10:40.855 "raid_level": "raid1", 00:10:40.855 "superblock": true, 00:10:40.855 "num_base_bdevs": 4, 00:10:40.855 "num_base_bdevs_discovered": 3, 00:10:40.855 "num_base_bdevs_operational": 4, 00:10:40.855 "base_bdevs_list": [ 00:10:40.855 { 00:10:40.855 "name": "BaseBdev1", 00:10:40.855 "uuid": "018899fc-750f-4a37-926d-2a71b184f780", 00:10:40.855 "is_configured": true, 00:10:40.855 "data_offset": 2048, 00:10:40.855 "data_size": 63488 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "name": "BaseBdev2", 00:10:40.855 "uuid": 
"4eb610a6-ecfc-4ca9-8a84-fd631c75a988", 00:10:40.855 "is_configured": true, 00:10:40.855 "data_offset": 2048, 00:10:40.855 "data_size": 63488 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "name": "BaseBdev3", 00:10:40.855 "uuid": "bfcaf774-600d-4e26-baa7-610a0e6f7d70", 00:10:40.855 "is_configured": true, 00:10:40.855 "data_offset": 2048, 00:10:40.855 "data_size": 63488 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "name": "BaseBdev4", 00:10:40.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.855 "is_configured": false, 00:10:40.855 "data_offset": 0, 00:10:40.855 "data_size": 0 00:10:40.855 } 00:10:40.855 ] 00:10:40.855 }' 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.855 23:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.425 [2024-09-30 23:28:21.038309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:41.425 [2024-09-30 23:28:21.038610] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:41.425 [2024-09-30 23:28:21.038663] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:41.425 [2024-09-30 23:28:21.038967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:41.425 [2024-09-30 23:28:21.039179] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:41.425 [2024-09-30 23:28:21.039229] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:41.425 
BaseBdev4 00:10:41.425 [2024-09-30 23:28:21.039399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.425 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.425 [ 00:10:41.425 { 00:10:41.425 "name": "BaseBdev4", 00:10:41.425 "aliases": [ 00:10:41.425 "33991996-d856-4dae-aee2-5edfd9a2e55b" 00:10:41.425 ], 00:10:41.425 "product_name": "Malloc disk", 00:10:41.425 "block_size": 512, 00:10:41.425 
"num_blocks": 65536, 00:10:41.425 "uuid": "33991996-d856-4dae-aee2-5edfd9a2e55b", 00:10:41.425 "assigned_rate_limits": { 00:10:41.425 "rw_ios_per_sec": 0, 00:10:41.425 "rw_mbytes_per_sec": 0, 00:10:41.425 "r_mbytes_per_sec": 0, 00:10:41.425 "w_mbytes_per_sec": 0 00:10:41.425 }, 00:10:41.425 "claimed": true, 00:10:41.425 "claim_type": "exclusive_write", 00:10:41.425 "zoned": false, 00:10:41.425 "supported_io_types": { 00:10:41.425 "read": true, 00:10:41.425 "write": true, 00:10:41.425 "unmap": true, 00:10:41.425 "flush": true, 00:10:41.425 "reset": true, 00:10:41.425 "nvme_admin": false, 00:10:41.425 "nvme_io": false, 00:10:41.425 "nvme_io_md": false, 00:10:41.425 "write_zeroes": true, 00:10:41.425 "zcopy": true, 00:10:41.425 "get_zone_info": false, 00:10:41.425 "zone_management": false, 00:10:41.425 "zone_append": false, 00:10:41.425 "compare": false, 00:10:41.425 "compare_and_write": false, 00:10:41.425 "abort": true, 00:10:41.425 "seek_hole": false, 00:10:41.425 "seek_data": false, 00:10:41.425 "copy": true, 00:10:41.425 "nvme_iov_md": false 00:10:41.425 }, 00:10:41.425 "memory_domains": [ 00:10:41.425 { 00:10:41.425 "dma_device_id": "system", 00:10:41.425 "dma_device_type": 1 00:10:41.426 }, 00:10:41.426 { 00:10:41.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.426 "dma_device_type": 2 00:10:41.426 } 00:10:41.426 ], 00:10:41.426 "driver_specific": {} 00:10:41.426 } 00:10:41.426 ] 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.426 "name": "Existed_Raid", 00:10:41.426 "uuid": "a5599104-6c78-48e9-945d-4debcce6ac03", 00:10:41.426 "strip_size_kb": 0, 00:10:41.426 "state": "online", 00:10:41.426 "raid_level": "raid1", 00:10:41.426 "superblock": true, 00:10:41.426 "num_base_bdevs": 4, 
00:10:41.426 "num_base_bdevs_discovered": 4, 00:10:41.426 "num_base_bdevs_operational": 4, 00:10:41.426 "base_bdevs_list": [ 00:10:41.426 { 00:10:41.426 "name": "BaseBdev1", 00:10:41.426 "uuid": "018899fc-750f-4a37-926d-2a71b184f780", 00:10:41.426 "is_configured": true, 00:10:41.426 "data_offset": 2048, 00:10:41.426 "data_size": 63488 00:10:41.426 }, 00:10:41.426 { 00:10:41.426 "name": "BaseBdev2", 00:10:41.426 "uuid": "4eb610a6-ecfc-4ca9-8a84-fd631c75a988", 00:10:41.426 "is_configured": true, 00:10:41.426 "data_offset": 2048, 00:10:41.426 "data_size": 63488 00:10:41.426 }, 00:10:41.426 { 00:10:41.426 "name": "BaseBdev3", 00:10:41.426 "uuid": "bfcaf774-600d-4e26-baa7-610a0e6f7d70", 00:10:41.426 "is_configured": true, 00:10:41.426 "data_offset": 2048, 00:10:41.426 "data_size": 63488 00:10:41.426 }, 00:10:41.426 { 00:10:41.426 "name": "BaseBdev4", 00:10:41.426 "uuid": "33991996-d856-4dae-aee2-5edfd9a2e55b", 00:10:41.426 "is_configured": true, 00:10:41.426 "data_offset": 2048, 00:10:41.426 "data_size": 63488 00:10:41.426 } 00:10:41.426 ] 00:10:41.426 }' 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.426 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.686 
23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.686 [2024-09-30 23:28:21.497965] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.686 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.946 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.946 "name": "Existed_Raid", 00:10:41.946 "aliases": [ 00:10:41.946 "a5599104-6c78-48e9-945d-4debcce6ac03" 00:10:41.946 ], 00:10:41.946 "product_name": "Raid Volume", 00:10:41.946 "block_size": 512, 00:10:41.946 "num_blocks": 63488, 00:10:41.946 "uuid": "a5599104-6c78-48e9-945d-4debcce6ac03", 00:10:41.946 "assigned_rate_limits": { 00:10:41.946 "rw_ios_per_sec": 0, 00:10:41.946 "rw_mbytes_per_sec": 0, 00:10:41.946 "r_mbytes_per_sec": 0, 00:10:41.946 "w_mbytes_per_sec": 0 00:10:41.946 }, 00:10:41.946 "claimed": false, 00:10:41.946 "zoned": false, 00:10:41.946 "supported_io_types": { 00:10:41.946 "read": true, 00:10:41.946 "write": true, 00:10:41.946 "unmap": false, 00:10:41.946 "flush": false, 00:10:41.946 "reset": true, 00:10:41.946 "nvme_admin": false, 00:10:41.946 "nvme_io": false, 00:10:41.946 "nvme_io_md": false, 00:10:41.946 "write_zeroes": true, 00:10:41.946 "zcopy": false, 00:10:41.946 "get_zone_info": false, 00:10:41.946 "zone_management": false, 00:10:41.946 "zone_append": false, 00:10:41.946 "compare": false, 00:10:41.946 "compare_and_write": false, 00:10:41.946 "abort": false, 00:10:41.946 "seek_hole": false, 00:10:41.946 "seek_data": false, 00:10:41.946 "copy": false, 00:10:41.946 
"nvme_iov_md": false 00:10:41.946 }, 00:10:41.946 "memory_domains": [ 00:10:41.946 { 00:10:41.946 "dma_device_id": "system", 00:10:41.946 "dma_device_type": 1 00:10:41.946 }, 00:10:41.947 { 00:10:41.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.947 "dma_device_type": 2 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "dma_device_id": "system", 00:10:41.947 "dma_device_type": 1 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.947 "dma_device_type": 2 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "dma_device_id": "system", 00:10:41.947 "dma_device_type": 1 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.947 "dma_device_type": 2 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "dma_device_id": "system", 00:10:41.947 "dma_device_type": 1 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.947 "dma_device_type": 2 00:10:41.947 } 00:10:41.947 ], 00:10:41.947 "driver_specific": { 00:10:41.947 "raid": { 00:10:41.947 "uuid": "a5599104-6c78-48e9-945d-4debcce6ac03", 00:10:41.947 "strip_size_kb": 0, 00:10:41.947 "state": "online", 00:10:41.947 "raid_level": "raid1", 00:10:41.947 "superblock": true, 00:10:41.947 "num_base_bdevs": 4, 00:10:41.947 "num_base_bdevs_discovered": 4, 00:10:41.947 "num_base_bdevs_operational": 4, 00:10:41.947 "base_bdevs_list": [ 00:10:41.947 { 00:10:41.947 "name": "BaseBdev1", 00:10:41.947 "uuid": "018899fc-750f-4a37-926d-2a71b184f780", 00:10:41.947 "is_configured": true, 00:10:41.947 "data_offset": 2048, 00:10:41.947 "data_size": 63488 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "name": "BaseBdev2", 00:10:41.947 "uuid": "4eb610a6-ecfc-4ca9-8a84-fd631c75a988", 00:10:41.947 "is_configured": true, 00:10:41.947 "data_offset": 2048, 00:10:41.947 "data_size": 63488 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "name": "BaseBdev3", 00:10:41.947 "uuid": "bfcaf774-600d-4e26-baa7-610a0e6f7d70", 00:10:41.947 "is_configured": true, 
00:10:41.947 "data_offset": 2048, 00:10:41.947 "data_size": 63488 00:10:41.947 }, 00:10:41.947 { 00:10:41.947 "name": "BaseBdev4", 00:10:41.947 "uuid": "33991996-d856-4dae-aee2-5edfd9a2e55b", 00:10:41.947 "is_configured": true, 00:10:41.947 "data_offset": 2048, 00:10:41.947 "data_size": 63488 00:10:41.947 } 00:10:41.947 ] 00:10:41.947 } 00:10:41.947 } 00:10:41.947 }' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:41.947 BaseBdev2 00:10:41.947 BaseBdev3 00:10:41.947 BaseBdev4' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.947 23:28:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.947 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.207 [2024-09-30 23:28:21.805220] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:42.207 23:28:21 
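The odd-looking `[[ 512 == \5\1\2\ \ \ ]]` comparisons above are not garbage: jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` renders the three null metadata fields as empty strings, so both `cmp_raid_bdev` and `cmp_base_bdev` become `"512"` plus three trailing spaces, and xtrace prints the right-hand side with the spaces backslash-escaped. A pure-bash illustration (`join_fields` is a hypothetical stand-in for the jq filter):

```shell
# join_fields mimics jq '[.block_size,.md_size,.md_interleave,.dif_type]|join(" ")':
# null fields become empty strings, so only the separators survive.
join_fields() {
    local IFS=' '
    printf '%s' "$*"
}

cmp_raid_bdev=$(join_fields 512 "" "" "")   # "512" plus three trailing spaces
cmp_base_bdev=$(join_fields 512 "" "" "")

# This is the comparison xtrace prints as [[ 512 == \5\1\2\ \ \ ]]:
# the right-hand side is quoted, so the trailing spaces must match exactly.
[[ $cmp_base_bdev == "$cmp_raid_bdev" ]] && echo "base bdev metadata format matches raid bdev"
```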
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.207 "name": "Existed_Raid", 00:10:42.207 "uuid": "a5599104-6c78-48e9-945d-4debcce6ac03", 00:10:42.207 "strip_size_kb": 0, 00:10:42.207 
"state": "online", 00:10:42.207 "raid_level": "raid1", 00:10:42.207 "superblock": true, 00:10:42.207 "num_base_bdevs": 4, 00:10:42.207 "num_base_bdevs_discovered": 3, 00:10:42.207 "num_base_bdevs_operational": 3, 00:10:42.207 "base_bdevs_list": [ 00:10:42.207 { 00:10:42.207 "name": null, 00:10:42.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.207 "is_configured": false, 00:10:42.207 "data_offset": 0, 00:10:42.207 "data_size": 63488 00:10:42.207 }, 00:10:42.207 { 00:10:42.207 "name": "BaseBdev2", 00:10:42.207 "uuid": "4eb610a6-ecfc-4ca9-8a84-fd631c75a988", 00:10:42.207 "is_configured": true, 00:10:42.207 "data_offset": 2048, 00:10:42.207 "data_size": 63488 00:10:42.207 }, 00:10:42.207 { 00:10:42.207 "name": "BaseBdev3", 00:10:42.207 "uuid": "bfcaf774-600d-4e26-baa7-610a0e6f7d70", 00:10:42.207 "is_configured": true, 00:10:42.207 "data_offset": 2048, 00:10:42.207 "data_size": 63488 00:10:42.207 }, 00:10:42.207 { 00:10:42.207 "name": "BaseBdev4", 00:10:42.207 "uuid": "33991996-d856-4dae-aee2-5edfd9a2e55b", 00:10:42.207 "is_configured": true, 00:10:42.207 "data_offset": 2048, 00:10:42.207 "data_size": 63488 00:10:42.207 } 00:10:42.207 ] 00:10:42.207 }' 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.207 23:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.467 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:42.467 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.467 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.467 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.467 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.467 23:28:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:42.467 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 [2024-09-30 23:28:22.330052] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 [2024-09-30 23:28:22.407457] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.727 [2024-09-30 23:28:22.488881] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:42.727 [2024-09-30 23:28:22.489011] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.727 [2024-09-30 23:28:22.510262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.727 [2024-09-30 23:28:22.510324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.727 [2024-09-30 23:28:22.510339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.727 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.728 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.988 BaseBdev2 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:42.988 [ 00:10:42.988 { 00:10:42.988 "name": "BaseBdev2", 00:10:42.988 "aliases": [ 00:10:42.988 "e6c4b126-13e9-4b48-a23d-1c4d275e131b" 00:10:42.988 ], 00:10:42.988 "product_name": "Malloc disk", 00:10:42.988 "block_size": 512, 00:10:42.988 "num_blocks": 65536, 00:10:42.988 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:42.988 "assigned_rate_limits": { 00:10:42.988 "rw_ios_per_sec": 0, 00:10:42.988 "rw_mbytes_per_sec": 0, 00:10:42.988 "r_mbytes_per_sec": 0, 00:10:42.988 "w_mbytes_per_sec": 0 00:10:42.988 }, 00:10:42.988 "claimed": false, 00:10:42.988 "zoned": false, 00:10:42.988 "supported_io_types": { 00:10:42.988 "read": true, 00:10:42.988 "write": true, 00:10:42.988 "unmap": true, 00:10:42.988 "flush": true, 00:10:42.988 "reset": true, 00:10:42.988 "nvme_admin": false, 00:10:42.988 "nvme_io": false, 00:10:42.988 "nvme_io_md": false, 00:10:42.988 "write_zeroes": true, 00:10:42.988 "zcopy": true, 00:10:42.988 "get_zone_info": false, 00:10:42.988 "zone_management": false, 00:10:42.988 "zone_append": false, 00:10:42.988 "compare": false, 00:10:42.988 "compare_and_write": false, 00:10:42.988 "abort": true, 00:10:42.988 "seek_hole": false, 00:10:42.988 "seek_data": false, 00:10:42.988 "copy": true, 00:10:42.988 "nvme_iov_md": false 00:10:42.988 }, 00:10:42.988 "memory_domains": [ 00:10:42.988 { 00:10:42.988 "dma_device_id": "system", 00:10:42.988 "dma_device_type": 1 00:10:42.988 }, 00:10:42.988 { 00:10:42.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.988 "dma_device_type": 2 00:10:42.988 } 00:10:42.988 ], 00:10:42.988 "driver_specific": {} 00:10:42.988 } 00:10:42.988 ] 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.988 23:28:22 
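The rebuilt BaseBdev2 comes from `bdev_malloc_create 32 512 -b BaseBdev2`, i.e. 32 MiB of 512-byte blocks — which is exactly where the `"num_blocks": 65536` in the dump above comes from:

```shell
# bdev_malloc_create <size_mb> <block_size>: 32 MiB carved into 512-byte blocks.
size_mb=32
block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "$num_blocks"   # 65536
```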
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.988 BaseBdev3 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.988 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.988 23:28:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.988 [ 00:10:42.988 { 00:10:42.988 "name": "BaseBdev3", 00:10:42.988 "aliases": [ 00:10:42.988 "aff4d175-566f-4295-99eb-de31db3bac81" 00:10:42.988 ], 00:10:42.988 "product_name": "Malloc disk", 00:10:42.988 "block_size": 512, 00:10:42.988 "num_blocks": 65536, 00:10:42.988 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:42.988 "assigned_rate_limits": { 00:10:42.988 "rw_ios_per_sec": 0, 00:10:42.988 "rw_mbytes_per_sec": 0, 00:10:42.988 "r_mbytes_per_sec": 0, 00:10:42.988 "w_mbytes_per_sec": 0 00:10:42.988 }, 00:10:42.988 "claimed": false, 00:10:42.988 "zoned": false, 00:10:42.988 "supported_io_types": { 00:10:42.988 "read": true, 00:10:42.988 "write": true, 00:10:42.988 "unmap": true, 00:10:42.988 "flush": true, 00:10:42.988 "reset": true, 00:10:42.988 "nvme_admin": false, 00:10:42.988 "nvme_io": false, 00:10:42.988 "nvme_io_md": false, 00:10:42.988 "write_zeroes": true, 00:10:42.988 "zcopy": true, 00:10:42.988 "get_zone_info": false, 00:10:42.988 "zone_management": false, 00:10:42.988 "zone_append": false, 00:10:42.988 "compare": false, 00:10:42.988 "compare_and_write": false, 00:10:42.988 "abort": true, 00:10:42.988 "seek_hole": false, 00:10:42.988 "seek_data": false, 00:10:42.988 "copy": true, 00:10:42.988 "nvme_iov_md": false 00:10:42.988 }, 00:10:42.988 "memory_domains": [ 00:10:42.988 { 00:10:42.988 "dma_device_id": "system", 00:10:42.988 "dma_device_type": 1 00:10:42.988 }, 00:10:42.988 { 00:10:42.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.988 "dma_device_type": 2 00:10:42.989 } 00:10:42.989 ], 00:10:42.989 "driver_specific": {} 00:10:42.989 } 00:10:42.989 ] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.989 BaseBdev4 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.989 [ 00:10:42.989 { 00:10:42.989 "name": "BaseBdev4", 00:10:42.989 "aliases": [ 00:10:42.989 "b0cae7fd-b094-4395-8aac-8e9ad1f9a751" 00:10:42.989 ], 00:10:42.989 "product_name": "Malloc disk", 00:10:42.989 "block_size": 512, 00:10:42.989 "num_blocks": 65536, 00:10:42.989 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:42.989 "assigned_rate_limits": { 00:10:42.989 "rw_ios_per_sec": 0, 00:10:42.989 "rw_mbytes_per_sec": 0, 00:10:42.989 "r_mbytes_per_sec": 0, 00:10:42.989 "w_mbytes_per_sec": 0 00:10:42.989 }, 00:10:42.989 "claimed": false, 00:10:42.989 "zoned": false, 00:10:42.989 "supported_io_types": { 00:10:42.989 "read": true, 00:10:42.989 "write": true, 00:10:42.989 "unmap": true, 00:10:42.989 "flush": true, 00:10:42.989 "reset": true, 00:10:42.989 "nvme_admin": false, 00:10:42.989 "nvme_io": false, 00:10:42.989 "nvme_io_md": false, 00:10:42.989 "write_zeroes": true, 00:10:42.989 "zcopy": true, 00:10:42.989 "get_zone_info": false, 00:10:42.989 "zone_management": false, 00:10:42.989 "zone_append": false, 00:10:42.989 "compare": false, 00:10:42.989 "compare_and_write": false, 00:10:42.989 "abort": true, 00:10:42.989 "seek_hole": false, 00:10:42.989 "seek_data": false, 00:10:42.989 "copy": true, 00:10:42.989 "nvme_iov_md": false 00:10:42.989 }, 00:10:42.989 "memory_domains": [ 00:10:42.989 { 00:10:42.989 "dma_device_id": "system", 00:10:42.989 "dma_device_type": 1 00:10:42.989 }, 00:10:42.989 { 00:10:42.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.989 "dma_device_type": 2 00:10:42.989 } 00:10:42.989 ], 00:10:42.989 "driver_specific": {} 00:10:42.989 } 00:10:42.989 ] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.989 [2024-09-30 23:28:22.742736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.989 [2024-09-30 23:28:22.742828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.989 [2024-09-30 23:28:22.742892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.989 [2024-09-30 23:28:22.745050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.989 [2024-09-30 23:28:22.745134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.989 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.989 "name": "Existed_Raid", 00:10:42.989 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:42.989 "strip_size_kb": 0, 00:10:42.989 "state": "configuring", 00:10:42.989 "raid_level": "raid1", 00:10:42.989 "superblock": true, 00:10:42.989 "num_base_bdevs": 4, 00:10:42.989 "num_base_bdevs_discovered": 3, 00:10:42.989 "num_base_bdevs_operational": 4, 00:10:42.989 "base_bdevs_list": [ 00:10:42.989 { 00:10:42.989 "name": "BaseBdev1", 00:10:42.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.989 "is_configured": false, 00:10:42.989 "data_offset": 0, 00:10:42.989 "data_size": 0 00:10:42.989 }, 00:10:42.989 { 00:10:42.989 "name": "BaseBdev2", 00:10:42.989 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 
00:10:42.989 "is_configured": true, 00:10:42.989 "data_offset": 2048, 00:10:42.989 "data_size": 63488 00:10:42.989 }, 00:10:42.989 { 00:10:42.989 "name": "BaseBdev3", 00:10:42.989 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:42.989 "is_configured": true, 00:10:42.989 "data_offset": 2048, 00:10:42.989 "data_size": 63488 00:10:42.989 }, 00:10:42.989 { 00:10:42.989 "name": "BaseBdev4", 00:10:42.989 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:42.989 "is_configured": true, 00:10:42.989 "data_offset": 2048, 00:10:42.989 "data_size": 63488 00:10:42.989 } 00:10:42.990 ] 00:10:42.990 }' 00:10:42.990 23:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.990 23:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.557 [2024-09-30 23:28:23.217981] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.557 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.557 "name": "Existed_Raid", 00:10:43.557 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:43.557 "strip_size_kb": 0, 00:10:43.557 "state": "configuring", 00:10:43.557 "raid_level": "raid1", 00:10:43.557 "superblock": true, 00:10:43.557 "num_base_bdevs": 4, 00:10:43.557 "num_base_bdevs_discovered": 2, 00:10:43.557 "num_base_bdevs_operational": 4, 00:10:43.557 "base_bdevs_list": [ 00:10:43.557 { 00:10:43.557 "name": "BaseBdev1", 00:10:43.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.557 "is_configured": false, 00:10:43.557 "data_offset": 0, 00:10:43.557 "data_size": 0 00:10:43.557 }, 00:10:43.557 { 00:10:43.557 "name": null, 00:10:43.557 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:43.557 
"is_configured": false, 00:10:43.557 "data_offset": 0, 00:10:43.557 "data_size": 63488 00:10:43.558 }, 00:10:43.558 { 00:10:43.558 "name": "BaseBdev3", 00:10:43.558 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:43.558 "is_configured": true, 00:10:43.558 "data_offset": 2048, 00:10:43.558 "data_size": 63488 00:10:43.558 }, 00:10:43.558 { 00:10:43.558 "name": "BaseBdev4", 00:10:43.558 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:43.558 "is_configured": true, 00:10:43.558 "data_offset": 2048, 00:10:43.558 "data_size": 63488 00:10:43.558 } 00:10:43.558 ] 00:10:43.558 }' 00:10:43.558 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.558 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.815 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.815 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.815 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.815 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.815 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.074 [2024-09-30 23:28:23.713985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.074 BaseBdev1 
00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.074 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.075 [ 00:10:44.075 { 00:10:44.075 "name": "BaseBdev1", 00:10:44.075 "aliases": [ 00:10:44.075 "27074221-fd49-4fef-a278-30142d9be2a4" 00:10:44.075 ], 00:10:44.075 "product_name": "Malloc disk", 00:10:44.075 "block_size": 512, 00:10:44.075 "num_blocks": 65536, 00:10:44.075 "uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:44.075 "assigned_rate_limits": { 00:10:44.075 
"rw_ios_per_sec": 0, 00:10:44.075 "rw_mbytes_per_sec": 0, 00:10:44.075 "r_mbytes_per_sec": 0, 00:10:44.075 "w_mbytes_per_sec": 0 00:10:44.075 }, 00:10:44.075 "claimed": true, 00:10:44.075 "claim_type": "exclusive_write", 00:10:44.075 "zoned": false, 00:10:44.075 "supported_io_types": { 00:10:44.075 "read": true, 00:10:44.075 "write": true, 00:10:44.075 "unmap": true, 00:10:44.075 "flush": true, 00:10:44.075 "reset": true, 00:10:44.075 "nvme_admin": false, 00:10:44.075 "nvme_io": false, 00:10:44.075 "nvme_io_md": false, 00:10:44.075 "write_zeroes": true, 00:10:44.075 "zcopy": true, 00:10:44.075 "get_zone_info": false, 00:10:44.075 "zone_management": false, 00:10:44.075 "zone_append": false, 00:10:44.075 "compare": false, 00:10:44.075 "compare_and_write": false, 00:10:44.075 "abort": true, 00:10:44.075 "seek_hole": false, 00:10:44.075 "seek_data": false, 00:10:44.075 "copy": true, 00:10:44.075 "nvme_iov_md": false 00:10:44.075 }, 00:10:44.075 "memory_domains": [ 00:10:44.075 { 00:10:44.075 "dma_device_id": "system", 00:10:44.075 "dma_device_type": 1 00:10:44.075 }, 00:10:44.075 { 00:10:44.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.075 "dma_device_type": 2 00:10:44.075 } 00:10:44.075 ], 00:10:44.075 "driver_specific": {} 00:10:44.075 } 00:10:44.075 ] 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.075 "name": "Existed_Raid", 00:10:44.075 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:44.075 "strip_size_kb": 0, 00:10:44.075 "state": "configuring", 00:10:44.075 "raid_level": "raid1", 00:10:44.075 "superblock": true, 00:10:44.075 "num_base_bdevs": 4, 00:10:44.075 "num_base_bdevs_discovered": 3, 00:10:44.075 "num_base_bdevs_operational": 4, 00:10:44.075 "base_bdevs_list": [ 00:10:44.075 { 00:10:44.075 "name": "BaseBdev1", 00:10:44.075 "uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:44.075 "is_configured": true, 00:10:44.075 "data_offset": 2048, 00:10:44.075 "data_size": 63488 
00:10:44.075 }, 00:10:44.075 { 00:10:44.075 "name": null, 00:10:44.075 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:44.075 "is_configured": false, 00:10:44.075 "data_offset": 0, 00:10:44.075 "data_size": 63488 00:10:44.075 }, 00:10:44.075 { 00:10:44.075 "name": "BaseBdev3", 00:10:44.075 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:44.075 "is_configured": true, 00:10:44.075 "data_offset": 2048, 00:10:44.075 "data_size": 63488 00:10:44.075 }, 00:10:44.075 { 00:10:44.075 "name": "BaseBdev4", 00:10:44.075 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:44.075 "is_configured": true, 00:10:44.075 "data_offset": 2048, 00:10:44.075 "data_size": 63488 00:10:44.075 } 00:10:44.075 ] 00:10:44.075 }' 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.075 23:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.645 
[2024-09-30 23:28:24.269078] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.645 23:28:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.645 "name": "Existed_Raid", 00:10:44.645 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:44.645 "strip_size_kb": 0, 00:10:44.645 "state": "configuring", 00:10:44.645 "raid_level": "raid1", 00:10:44.645 "superblock": true, 00:10:44.645 "num_base_bdevs": 4, 00:10:44.645 "num_base_bdevs_discovered": 2, 00:10:44.645 "num_base_bdevs_operational": 4, 00:10:44.645 "base_bdevs_list": [ 00:10:44.645 { 00:10:44.645 "name": "BaseBdev1", 00:10:44.645 "uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:44.645 "is_configured": true, 00:10:44.645 "data_offset": 2048, 00:10:44.645 "data_size": 63488 00:10:44.645 }, 00:10:44.645 { 00:10:44.645 "name": null, 00:10:44.645 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:44.645 "is_configured": false, 00:10:44.645 "data_offset": 0, 00:10:44.645 "data_size": 63488 00:10:44.645 }, 00:10:44.645 { 00:10:44.645 "name": null, 00:10:44.645 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:44.645 "is_configured": false, 00:10:44.645 "data_offset": 0, 00:10:44.645 "data_size": 63488 00:10:44.645 }, 00:10:44.645 { 00:10:44.645 "name": "BaseBdev4", 00:10:44.645 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:44.645 "is_configured": true, 00:10:44.645 "data_offset": 2048, 00:10:44.645 "data_size": 63488 00:10:44.645 } 00:10:44.645 ] 00:10:44.645 }' 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.645 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.904 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.904 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.904 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.905 
23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.905 [2024-09-30 23:28:24.748386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:44.905 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.164 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.164 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.164 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.164 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.164 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.164 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.164 "name": "Existed_Raid", 00:10:45.164 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:45.164 "strip_size_kb": 0, 00:10:45.164 "state": "configuring", 00:10:45.164 "raid_level": "raid1", 00:10:45.164 "superblock": true, 00:10:45.164 "num_base_bdevs": 4, 00:10:45.164 "num_base_bdevs_discovered": 3, 00:10:45.164 "num_base_bdevs_operational": 4, 00:10:45.164 "base_bdevs_list": [ 00:10:45.164 { 00:10:45.164 "name": "BaseBdev1", 00:10:45.164 "uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:45.164 "is_configured": true, 00:10:45.164 "data_offset": 2048, 00:10:45.164 "data_size": 63488 00:10:45.164 }, 00:10:45.164 { 00:10:45.164 "name": null, 00:10:45.164 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:45.164 "is_configured": false, 00:10:45.164 "data_offset": 0, 00:10:45.164 "data_size": 63488 00:10:45.164 }, 00:10:45.164 { 00:10:45.164 "name": "BaseBdev3", 00:10:45.164 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:45.164 "is_configured": true, 00:10:45.164 "data_offset": 2048, 00:10:45.164 "data_size": 63488 00:10:45.164 }, 00:10:45.164 { 00:10:45.164 "name": "BaseBdev4", 00:10:45.164 "uuid": 
"b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:45.164 "is_configured": true, 00:10:45.164 "data_offset": 2048, 00:10:45.164 "data_size": 63488 00:10:45.164 } 00:10:45.164 ] 00:10:45.164 }' 00:10:45.164 23:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.164 23:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.427 [2024-09-30 23:28:25.239635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.427 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.693 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.693 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.693 "name": "Existed_Raid", 00:10:45.693 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:45.693 "strip_size_kb": 0, 00:10:45.693 "state": "configuring", 00:10:45.693 "raid_level": "raid1", 00:10:45.693 "superblock": true, 00:10:45.693 "num_base_bdevs": 4, 00:10:45.693 "num_base_bdevs_discovered": 2, 00:10:45.693 "num_base_bdevs_operational": 4, 00:10:45.693 "base_bdevs_list": [ 00:10:45.693 { 00:10:45.693 "name": null, 00:10:45.693 
"uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:45.693 "is_configured": false, 00:10:45.693 "data_offset": 0, 00:10:45.693 "data_size": 63488 00:10:45.693 }, 00:10:45.693 { 00:10:45.693 "name": null, 00:10:45.693 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:45.693 "is_configured": false, 00:10:45.693 "data_offset": 0, 00:10:45.693 "data_size": 63488 00:10:45.693 }, 00:10:45.693 { 00:10:45.693 "name": "BaseBdev3", 00:10:45.693 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:45.693 "is_configured": true, 00:10:45.693 "data_offset": 2048, 00:10:45.693 "data_size": 63488 00:10:45.693 }, 00:10:45.693 { 00:10:45.693 "name": "BaseBdev4", 00:10:45.693 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:45.693 "is_configured": true, 00:10:45.693 "data_offset": 2048, 00:10:45.693 "data_size": 63488 00:10:45.693 } 00:10:45.693 ] 00:10:45.693 }' 00:10:45.693 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.693 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.961 [2024-09-30 23:28:25.673642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.961 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.962 23:28:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.962 "name": "Existed_Raid", 00:10:45.962 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:45.962 "strip_size_kb": 0, 00:10:45.962 "state": "configuring", 00:10:45.962 "raid_level": "raid1", 00:10:45.962 "superblock": true, 00:10:45.962 "num_base_bdevs": 4, 00:10:45.962 "num_base_bdevs_discovered": 3, 00:10:45.962 "num_base_bdevs_operational": 4, 00:10:45.962 "base_bdevs_list": [ 00:10:45.962 { 00:10:45.962 "name": null, 00:10:45.962 "uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:45.962 "is_configured": false, 00:10:45.962 "data_offset": 0, 00:10:45.962 "data_size": 63488 00:10:45.962 }, 00:10:45.962 { 00:10:45.962 "name": "BaseBdev2", 00:10:45.962 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:45.962 "is_configured": true, 00:10:45.962 "data_offset": 2048, 00:10:45.962 "data_size": 63488 00:10:45.962 }, 00:10:45.962 { 00:10:45.962 "name": "BaseBdev3", 00:10:45.962 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:45.962 "is_configured": true, 00:10:45.962 "data_offset": 2048, 00:10:45.962 "data_size": 63488 00:10:45.962 }, 00:10:45.962 { 00:10:45.962 "name": "BaseBdev4", 00:10:45.962 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:45.962 "is_configured": true, 00:10:45.962 "data_offset": 2048, 00:10:45.962 "data_size": 63488 00:10:45.962 } 00:10:45.962 ] 00:10:45.962 }' 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.962 23:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:46.528 23:28:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 27074221-fd49-4fef-a278-30142d9be2a4 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 [2024-09-30 23:28:26.232082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:46.528 [2024-09-30 23:28:26.232318] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:46.528 [2024-09-30 23:28:26.232338] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:46.528 NewBaseBdev 00:10:46.528 [2024-09-30 23:28:26.232599] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:46.528 [2024-09-30 23:28:26.232793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:46.528 [2024-09-30 23:28:26.232806] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:46.528 [2024-09-30 23:28:26.232953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:46.528 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.528 23:28:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 [ 00:10:46.528 { 00:10:46.528 "name": "NewBaseBdev", 00:10:46.528 "aliases": [ 00:10:46.528 "27074221-fd49-4fef-a278-30142d9be2a4" 00:10:46.528 ], 00:10:46.528 "product_name": "Malloc disk", 00:10:46.528 "block_size": 512, 00:10:46.528 "num_blocks": 65536, 00:10:46.528 "uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:46.528 "assigned_rate_limits": { 00:10:46.528 "rw_ios_per_sec": 0, 00:10:46.528 "rw_mbytes_per_sec": 0, 00:10:46.528 "r_mbytes_per_sec": 0, 00:10:46.528 "w_mbytes_per_sec": 0 00:10:46.528 }, 00:10:46.528 "claimed": true, 00:10:46.528 "claim_type": "exclusive_write", 00:10:46.528 "zoned": false, 00:10:46.528 "supported_io_types": { 00:10:46.528 "read": true, 00:10:46.528 "write": true, 00:10:46.528 "unmap": true, 00:10:46.528 "flush": true, 00:10:46.528 "reset": true, 00:10:46.528 "nvme_admin": false, 00:10:46.529 "nvme_io": false, 00:10:46.529 "nvme_io_md": false, 00:10:46.529 "write_zeroes": true, 00:10:46.529 "zcopy": true, 00:10:46.529 "get_zone_info": false, 00:10:46.529 "zone_management": false, 00:10:46.529 "zone_append": false, 00:10:46.529 "compare": false, 00:10:46.529 "compare_and_write": false, 00:10:46.529 "abort": true, 00:10:46.529 "seek_hole": false, 00:10:46.529 "seek_data": false, 00:10:46.529 "copy": true, 00:10:46.529 "nvme_iov_md": false 00:10:46.529 }, 00:10:46.529 "memory_domains": [ 00:10:46.529 { 00:10:46.529 "dma_device_id": "system", 00:10:46.529 "dma_device_type": 1 00:10:46.529 }, 00:10:46.529 { 00:10:46.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.529 "dma_device_type": 2 00:10:46.529 } 00:10:46.529 ], 00:10:46.529 "driver_specific": {} 00:10:46.529 } 00:10:46.529 ] 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:46.529 23:28:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.529 "name": "Existed_Raid", 00:10:46.529 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:46.529 "strip_size_kb": 0, 00:10:46.529 
"state": "online", 00:10:46.529 "raid_level": "raid1", 00:10:46.529 "superblock": true, 00:10:46.529 "num_base_bdevs": 4, 00:10:46.529 "num_base_bdevs_discovered": 4, 00:10:46.529 "num_base_bdevs_operational": 4, 00:10:46.529 "base_bdevs_list": [ 00:10:46.529 { 00:10:46.529 "name": "NewBaseBdev", 00:10:46.529 "uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:46.529 "is_configured": true, 00:10:46.529 "data_offset": 2048, 00:10:46.529 "data_size": 63488 00:10:46.529 }, 00:10:46.529 { 00:10:46.529 "name": "BaseBdev2", 00:10:46.529 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:46.529 "is_configured": true, 00:10:46.529 "data_offset": 2048, 00:10:46.529 "data_size": 63488 00:10:46.529 }, 00:10:46.529 { 00:10:46.529 "name": "BaseBdev3", 00:10:46.529 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:46.529 "is_configured": true, 00:10:46.529 "data_offset": 2048, 00:10:46.529 "data_size": 63488 00:10:46.529 }, 00:10:46.529 { 00:10:46.529 "name": "BaseBdev4", 00:10:46.529 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:46.529 "is_configured": true, 00:10:46.529 "data_offset": 2048, 00:10:46.529 "data_size": 63488 00:10:46.529 } 00:10:46.529 ] 00:10:46.529 }' 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.529 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.095 
23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.095 [2024-09-30 23:28:26.671685] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.095 "name": "Existed_Raid", 00:10:47.095 "aliases": [ 00:10:47.095 "d205d8fa-d283-4eac-89fb-c5fe810bcfe1" 00:10:47.095 ], 00:10:47.095 "product_name": "Raid Volume", 00:10:47.095 "block_size": 512, 00:10:47.095 "num_blocks": 63488, 00:10:47.095 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:47.095 "assigned_rate_limits": { 00:10:47.095 "rw_ios_per_sec": 0, 00:10:47.095 "rw_mbytes_per_sec": 0, 00:10:47.095 "r_mbytes_per_sec": 0, 00:10:47.095 "w_mbytes_per_sec": 0 00:10:47.095 }, 00:10:47.095 "claimed": false, 00:10:47.095 "zoned": false, 00:10:47.095 "supported_io_types": { 00:10:47.095 "read": true, 00:10:47.095 "write": true, 00:10:47.095 "unmap": false, 00:10:47.095 "flush": false, 00:10:47.095 "reset": true, 00:10:47.095 "nvme_admin": false, 00:10:47.095 "nvme_io": false, 00:10:47.095 "nvme_io_md": false, 00:10:47.095 "write_zeroes": true, 00:10:47.095 "zcopy": false, 00:10:47.095 "get_zone_info": false, 00:10:47.095 "zone_management": false, 00:10:47.095 "zone_append": false, 00:10:47.095 "compare": false, 00:10:47.095 "compare_and_write": false, 00:10:47.095 
"abort": false, 00:10:47.095 "seek_hole": false, 00:10:47.095 "seek_data": false, 00:10:47.095 "copy": false, 00:10:47.095 "nvme_iov_md": false 00:10:47.095 }, 00:10:47.095 "memory_domains": [ 00:10:47.095 { 00:10:47.095 "dma_device_id": "system", 00:10:47.095 "dma_device_type": 1 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.095 "dma_device_type": 2 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "dma_device_id": "system", 00:10:47.095 "dma_device_type": 1 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.095 "dma_device_type": 2 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "dma_device_id": "system", 00:10:47.095 "dma_device_type": 1 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.095 "dma_device_type": 2 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "dma_device_id": "system", 00:10:47.095 "dma_device_type": 1 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.095 "dma_device_type": 2 00:10:47.095 } 00:10:47.095 ], 00:10:47.095 "driver_specific": { 00:10:47.095 "raid": { 00:10:47.095 "uuid": "d205d8fa-d283-4eac-89fb-c5fe810bcfe1", 00:10:47.095 "strip_size_kb": 0, 00:10:47.095 "state": "online", 00:10:47.095 "raid_level": "raid1", 00:10:47.095 "superblock": true, 00:10:47.095 "num_base_bdevs": 4, 00:10:47.095 "num_base_bdevs_discovered": 4, 00:10:47.095 "num_base_bdevs_operational": 4, 00:10:47.095 "base_bdevs_list": [ 00:10:47.095 { 00:10:47.095 "name": "NewBaseBdev", 00:10:47.095 "uuid": "27074221-fd49-4fef-a278-30142d9be2a4", 00:10:47.095 "is_configured": true, 00:10:47.095 "data_offset": 2048, 00:10:47.095 "data_size": 63488 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "name": "BaseBdev2", 00:10:47.095 "uuid": "e6c4b126-13e9-4b48-a23d-1c4d275e131b", 00:10:47.095 "is_configured": true, 00:10:47.095 "data_offset": 2048, 00:10:47.095 "data_size": 63488 00:10:47.095 }, 00:10:47.095 { 
00:10:47.095 "name": "BaseBdev3", 00:10:47.095 "uuid": "aff4d175-566f-4295-99eb-de31db3bac81", 00:10:47.095 "is_configured": true, 00:10:47.095 "data_offset": 2048, 00:10:47.095 "data_size": 63488 00:10:47.095 }, 00:10:47.095 { 00:10:47.095 "name": "BaseBdev4", 00:10:47.095 "uuid": "b0cae7fd-b094-4395-8aac-8e9ad1f9a751", 00:10:47.095 "is_configured": true, 00:10:47.095 "data_offset": 2048, 00:10:47.095 "data_size": 63488 00:10:47.095 } 00:10:47.095 ] 00:10:47.095 } 00:10:47.095 } 00:10:47.095 }' 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:47.095 BaseBdev2 00:10:47.095 BaseBdev3 00:10:47.095 BaseBdev4' 00:10:47.095 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.096 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.386 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.386 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.386 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.386 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.386 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.386 [2024-09-30 23:28:26.971170] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.386 [2024-09-30 23:28:26.971265] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.386 [2024-09-30 23:28:26.971358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.387 [2024-09-30 23:28:26.971685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.387 [2024-09-30 23:28:26.971707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:10:47.387 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.387 23:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84669 00:10:47.387 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84669 ']' 00:10:47.387 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84669 00:10:47.387 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:47.387 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.387 23:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84669 00:10:47.387 23:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.387 23:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.387 23:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84669' 00:10:47.387 killing process with pid 84669 00:10:47.387 23:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84669 00:10:47.387 [2024-09-30 23:28:27.010902] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.387 23:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84669 00:10:47.387 [2024-09-30 23:28:27.051907] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.646 23:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:47.646 00:10:47.646 real 0m9.518s 00:10:47.646 user 0m16.118s 00:10:47.646 sys 0m2.111s 00:10:47.646 23:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:47.646 23:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.646 ************************************ 00:10:47.646 END TEST raid_state_function_test_sb 00:10:47.646 ************************************ 00:10:47.646 23:28:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:47.646 23:28:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:47.646 23:28:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.646 23:28:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.646 ************************************ 00:10:47.646 START TEST raid_superblock_test 00:10:47.646 ************************************ 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:47.646 23:28:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85319 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85319 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85319 ']' 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.646 23:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.647 23:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.647 23:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.647 23:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.647 [2024-09-30 23:28:27.461968] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:47.647 [2024-09-30 23:28:27.462081] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85319 ] 00:10:47.906 [2024-09-30 23:28:27.623005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.906 [2024-09-30 23:28:27.668666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.906 [2024-09-30 23:28:27.712410] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.906 [2024-09-30 23:28:27.712548] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:48.474 
23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.474 malloc1 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.474 [2024-09-30 23:28:28.295728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:48.474 [2024-09-30 23:28:28.295916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.474 [2024-09-30 23:28:28.295969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:48.474 [2024-09-30 23:28:28.296033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.474 [2024-09-30 23:28:28.298214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.474 [2024-09-30 23:28:28.298301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:48.474 pt1 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.474 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.734 malloc2 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.734 [2024-09-30 23:28:28.338494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.734 [2024-09-30 23:28:28.338643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.734 [2024-09-30 23:28:28.338685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:48.734 [2024-09-30 23:28:28.338729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.734 [2024-09-30 23:28:28.340910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.734 [2024-09-30 23:28:28.340994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.734 
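The xtrace above shows the test's set-up loop: for each base device `i` it derives a malloc bdev name (`malloc<i>`), a passthru name (`pt<i>`), and a fixed UUID (`00000000-0000-0000-0000-00000000000<i>`), then issues `bdev_malloc_create 32 512 -b malloc<i>` followed by `bdev_passthru_create -b malloc<i> -p pt<i> -u <uuid>`. A minimal Python sketch of that naming and RPC-argument generation is below; it is illustrative only — the real test drives these calls through the script's `rpc_cmd` shell helper against a running `bdev_svc` target.

```python
# Sketch of the per-base-bdev set-up loop seen in the xtrace above.
# The names and UUID pattern mirror bdev_raid.sh@416-426; returning
# (method, args) pairs stands in for the script's rpc_cmd helper.

def setup_rpc_calls(num_base_bdevs=4, num_blocks=32, block_size=512):
    """Return the (method, args) pairs the test issues for each base bdev."""
    calls = []
    for i in range(1, num_base_bdevs + 1):
        bdev_malloc = f"malloc{i}"
        bdev_pt = f"pt{i}"
        bdev_pt_uuid = f"00000000-0000-0000-0000-{i:012d}"
        # bdev_malloc_create 32 512 -b malloc<i>
        calls.append(("bdev_malloc_create",
                      [str(num_blocks), str(block_size), "-b", bdev_malloc]))
        # bdev_passthru_create -b malloc<i> -p pt<i> -u <uuid>
        calls.append(("bdev_passthru_create",
                      ["-b", bdev_malloc, "-p", bdev_pt, "-u", bdev_pt_uuid]))
    return calls

calls = setup_rpc_calls()
```

Each passthru bdev claims its malloc base (the `bdev claimed` / `created pt_bdev for: pt<i>` notices above), so the raid is later assembled on the `pt*` devices rather than on the malloc bdevs directly.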
pt2 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.734 malloc3 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.734 [2024-09-30 23:28:28.371310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:48.734 [2024-09-30 23:28:28.371446] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.734 [2024-09-30 23:28:28.371488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:48.734 [2024-09-30 23:28:28.371525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.734 [2024-09-30 23:28:28.373696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.734 [2024-09-30 23:28:28.373797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:48.734 pt3 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.734 malloc4 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.734 [2024-09-30 23:28:28.404040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:48.734 [2024-09-30 23:28:28.404105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.734 [2024-09-30 23:28:28.404123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:48.734 [2024-09-30 23:28:28.404138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.734 [2024-09-30 23:28:28.406222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.734 [2024-09-30 23:28:28.406269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:48.734 pt4 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.734 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.734 [2024-09-30 23:28:28.416097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:48.734 [2024-09-30 23:28:28.417916] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.734 [2024-09-30 23:28:28.417987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:48.734 [2024-09-30 23:28:28.418034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:48.734 [2024-09-30 23:28:28.418199] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:48.734 [2024-09-30 23:28:28.418224] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:48.735 [2024-09-30 23:28:28.418515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:48.735 [2024-09-30 23:28:28.418683] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:48.735 [2024-09-30 23:28:28.418696] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:48.735 [2024-09-30 23:28:28.418896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.735 
23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.735 "name": "raid_bdev1", 00:10:48.735 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:48.735 "strip_size_kb": 0, 00:10:48.735 "state": "online", 00:10:48.735 "raid_level": "raid1", 00:10:48.735 "superblock": true, 00:10:48.735 "num_base_bdevs": 4, 00:10:48.735 "num_base_bdevs_discovered": 4, 00:10:48.735 "num_base_bdevs_operational": 4, 00:10:48.735 "base_bdevs_list": [ 00:10:48.735 { 00:10:48.735 "name": "pt1", 00:10:48.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:48.735 "is_configured": true, 00:10:48.735 "data_offset": 2048, 00:10:48.735 "data_size": 63488 00:10:48.735 }, 00:10:48.735 { 00:10:48.735 "name": "pt2", 00:10:48.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.735 "is_configured": true, 00:10:48.735 "data_offset": 2048, 00:10:48.735 "data_size": 63488 00:10:48.735 }, 00:10:48.735 { 00:10:48.735 "name": "pt3", 00:10:48.735 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.735 "is_configured": true, 00:10:48.735 "data_offset": 2048, 00:10:48.735 "data_size": 63488 
00:10:48.735 }, 00:10:48.735 { 00:10:48.735 "name": "pt4", 00:10:48.735 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:48.735 "is_configured": true, 00:10:48.735 "data_offset": 2048, 00:10:48.735 "data_size": 63488 00:10:48.735 } 00:10:48.735 ] 00:10:48.735 }' 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.735 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.304 [2024-09-30 23:28:28.927574] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.304 "name": "raid_bdev1", 00:10:49.304 "aliases": [ 00:10:49.304 "c31dbbee-efcc-4a4c-9661-024d89901d2f" 00:10:49.304 ], 
00:10:49.304 "product_name": "Raid Volume", 00:10:49.304 "block_size": 512, 00:10:49.304 "num_blocks": 63488, 00:10:49.304 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:49.304 "assigned_rate_limits": { 00:10:49.304 "rw_ios_per_sec": 0, 00:10:49.304 "rw_mbytes_per_sec": 0, 00:10:49.304 "r_mbytes_per_sec": 0, 00:10:49.304 "w_mbytes_per_sec": 0 00:10:49.304 }, 00:10:49.304 "claimed": false, 00:10:49.304 "zoned": false, 00:10:49.304 "supported_io_types": { 00:10:49.304 "read": true, 00:10:49.304 "write": true, 00:10:49.304 "unmap": false, 00:10:49.304 "flush": false, 00:10:49.304 "reset": true, 00:10:49.304 "nvme_admin": false, 00:10:49.304 "nvme_io": false, 00:10:49.304 "nvme_io_md": false, 00:10:49.304 "write_zeroes": true, 00:10:49.304 "zcopy": false, 00:10:49.304 "get_zone_info": false, 00:10:49.304 "zone_management": false, 00:10:49.304 "zone_append": false, 00:10:49.304 "compare": false, 00:10:49.304 "compare_and_write": false, 00:10:49.304 "abort": false, 00:10:49.304 "seek_hole": false, 00:10:49.304 "seek_data": false, 00:10:49.304 "copy": false, 00:10:49.304 "nvme_iov_md": false 00:10:49.304 }, 00:10:49.304 "memory_domains": [ 00:10:49.304 { 00:10:49.304 "dma_device_id": "system", 00:10:49.304 "dma_device_type": 1 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.304 "dma_device_type": 2 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "dma_device_id": "system", 00:10:49.304 "dma_device_type": 1 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.304 "dma_device_type": 2 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "dma_device_id": "system", 00:10:49.304 "dma_device_type": 1 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.304 "dma_device_type": 2 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "dma_device_id": "system", 00:10:49.304 "dma_device_type": 1 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:49.304 "dma_device_type": 2 00:10:49.304 } 00:10:49.304 ], 00:10:49.304 "driver_specific": { 00:10:49.304 "raid": { 00:10:49.304 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:49.304 "strip_size_kb": 0, 00:10:49.304 "state": "online", 00:10:49.304 "raid_level": "raid1", 00:10:49.304 "superblock": true, 00:10:49.304 "num_base_bdevs": 4, 00:10:49.304 "num_base_bdevs_discovered": 4, 00:10:49.304 "num_base_bdevs_operational": 4, 00:10:49.304 "base_bdevs_list": [ 00:10:49.304 { 00:10:49.304 "name": "pt1", 00:10:49.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.304 "is_configured": true, 00:10:49.304 "data_offset": 2048, 00:10:49.304 "data_size": 63488 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "name": "pt2", 00:10:49.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.304 "is_configured": true, 00:10:49.304 "data_offset": 2048, 00:10:49.304 "data_size": 63488 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "name": "pt3", 00:10:49.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.304 "is_configured": true, 00:10:49.304 "data_offset": 2048, 00:10:49.304 "data_size": 63488 00:10:49.304 }, 00:10:49.304 { 00:10:49.304 "name": "pt4", 00:10:49.304 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.304 "is_configured": true, 00:10:49.304 "data_offset": 2048, 00:10:49.304 "data_size": 63488 00:10:49.304 } 00:10:49.304 ] 00:10:49.304 } 00:10:49.304 } 00:10:49.304 }' 00:10:49.304 23:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:49.304 pt2 00:10:49.304 pt3 00:10:49.304 pt4' 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.304 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.305 23:28:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.305 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.564 [2024-09-30 23:28:29.258992] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c31dbbee-efcc-4a4c-9661-024d89901d2f 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c31dbbee-efcc-4a4c-9661-024d89901d2f ']' 00:10:49.564 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.565 [2024-09-30 23:28:29.306587] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.565 [2024-09-30 23:28:29.306678] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.565 [2024-09-30 23:28:29.306814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.565 [2024-09-30 23:28:29.306953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.565 [2024-09-30 23:28:29.306969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.565 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.824 23:28:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.824 [2024-09-30 23:28:29.462375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:49.824 [2024-09-30 23:28:29.464414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:49.824 [2024-09-30 23:28:29.464549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:49.824 [2024-09-30 23:28:29.464590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:49.824 [2024-09-30 23:28:29.464645] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:49.824 [2024-09-30 23:28:29.464705] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:49.824 [2024-09-30 23:28:29.464733] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:49.824 [2024-09-30 23:28:29.464754] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:49.824 [2024-09-30 23:28:29.464773] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.824 [2024-09-30 23:28:29.464793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name 
raid_bdev1, state configuring 00:10:49.824 request: 00:10:49.824 { 00:10:49.824 "name": "raid_bdev1", 00:10:49.824 "raid_level": "raid1", 00:10:49.824 "base_bdevs": [ 00:10:49.824 "malloc1", 00:10:49.824 "malloc2", 00:10:49.824 "malloc3", 00:10:49.824 "malloc4" 00:10:49.824 ], 00:10:49.824 "superblock": false, 00:10:49.824 "method": "bdev_raid_create", 00:10:49.824 "req_id": 1 00:10:49.824 } 00:10:49.824 Got JSON-RPC error response 00:10:49.824 response: 00:10:49.824 { 00:10:49.824 "code": -17, 00:10:49.824 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:49.824 } 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:49.824 
23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.824 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.824 [2024-09-30 23:28:29.518220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.824 [2024-09-30 23:28:29.518329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.825 [2024-09-30 23:28:29.518372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:49.825 [2024-09-30 23:28:29.518405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.825 [2024-09-30 23:28:29.520619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.825 [2024-09-30 23:28:29.520701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.825 [2024-09-30 23:28:29.520804] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:49.825 [2024-09-30 23:28:29.520906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:49.825 pt1 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.825 23:28:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.825 "name": "raid_bdev1", 00:10:49.825 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:49.825 "strip_size_kb": 0, 00:10:49.825 "state": "configuring", 00:10:49.825 "raid_level": "raid1", 00:10:49.825 "superblock": true, 00:10:49.825 "num_base_bdevs": 4, 00:10:49.825 "num_base_bdevs_discovered": 1, 00:10:49.825 "num_base_bdevs_operational": 4, 00:10:49.825 "base_bdevs_list": [ 00:10:49.825 { 00:10:49.825 "name": "pt1", 00:10:49.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.825 "is_configured": true, 00:10:49.825 "data_offset": 2048, 00:10:49.825 "data_size": 63488 00:10:49.825 }, 00:10:49.825 { 00:10:49.825 "name": null, 00:10:49.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.825 "is_configured": false, 00:10:49.825 "data_offset": 2048, 00:10:49.825 "data_size": 63488 00:10:49.825 }, 00:10:49.825 { 00:10:49.825 "name": null, 00:10:49.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.825 
"is_configured": false, 00:10:49.825 "data_offset": 2048, 00:10:49.825 "data_size": 63488 00:10:49.825 }, 00:10:49.825 { 00:10:49.825 "name": null, 00:10:49.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.825 "is_configured": false, 00:10:49.825 "data_offset": 2048, 00:10:49.825 "data_size": 63488 00:10:49.825 } 00:10:49.825 ] 00:10:49.825 }' 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.825 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:50.393 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.393 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.393 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 [2024-09-30 23:28:29.961503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.393 [2024-09-30 23:28:29.961578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.393 [2024-09-30 23:28:29.961603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:50.393 [2024-09-30 23:28:29.961615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.393 [2024-09-30 23:28:29.962089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.393 [2024-09-30 23:28:29.962119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.393 [2024-09-30 23:28:29.962228] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:50.393 [2024-09-30 23:28:29.962264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:50.393 pt2 00:10:50.393 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.393 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.394 [2024-09-30 23:28:29.973489] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.394 23:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.394 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.394 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.394 "name": "raid_bdev1", 00:10:50.394 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:50.394 "strip_size_kb": 0, 00:10:50.394 "state": "configuring", 00:10:50.394 "raid_level": "raid1", 00:10:50.394 "superblock": true, 00:10:50.394 "num_base_bdevs": 4, 00:10:50.394 "num_base_bdevs_discovered": 1, 00:10:50.394 "num_base_bdevs_operational": 4, 00:10:50.394 "base_bdevs_list": [ 00:10:50.394 { 00:10:50.394 "name": "pt1", 00:10:50.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.394 "is_configured": true, 00:10:50.394 "data_offset": 2048, 00:10:50.394 "data_size": 63488 00:10:50.394 }, 00:10:50.394 { 00:10:50.394 "name": null, 00:10:50.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.394 "is_configured": false, 00:10:50.394 "data_offset": 0, 00:10:50.394 "data_size": 63488 00:10:50.394 }, 00:10:50.394 { 00:10:50.394 "name": null, 00:10:50.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.394 "is_configured": false, 00:10:50.394 "data_offset": 2048, 00:10:50.394 "data_size": 63488 00:10:50.394 }, 00:10:50.394 { 00:10:50.394 "name": null, 00:10:50.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.394 "is_configured": false, 00:10:50.394 "data_offset": 2048, 00:10:50.394 "data_size": 63488 00:10:50.394 } 00:10:50.394 ] 00:10:50.394 }' 00:10:50.394 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.394 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.653 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:50.653 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.654 [2024-09-30 23:28:30.396797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.654 [2024-09-30 23:28:30.396981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.654 [2024-09-30 23:28:30.397037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:50.654 [2024-09-30 23:28:30.397087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.654 [2024-09-30 23:28:30.397544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.654 [2024-09-30 23:28:30.397618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.654 [2024-09-30 23:28:30.397743] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:50.654 [2024-09-30 23:28:30.397805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.654 pt2 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:50.654 23:28:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.654 [2024-09-30 23:28:30.408719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:50.654 [2024-09-30 23:28:30.408845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.654 [2024-09-30 23:28:30.408885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:50.654 [2024-09-30 23:28:30.408899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.654 [2024-09-30 23:28:30.409271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.654 [2024-09-30 23:28:30.409296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:50.654 [2024-09-30 23:28:30.409362] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:50.654 [2024-09-30 23:28:30.409386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:50.654 pt3 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.654 [2024-09-30 23:28:30.420705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:50.654 [2024-09-30 
23:28:30.420762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.654 [2024-09-30 23:28:30.420778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:50.654 [2024-09-30 23:28:30.420790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.654 [2024-09-30 23:28:30.421122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.654 [2024-09-30 23:28:30.421146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:50.654 [2024-09-30 23:28:30.421204] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:50.654 [2024-09-30 23:28:30.421243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:50.654 [2024-09-30 23:28:30.421346] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:50.654 [2024-09-30 23:28:30.421359] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:50.654 [2024-09-30 23:28:30.421606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:50.654 [2024-09-30 23:28:30.421737] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:50.654 [2024-09-30 23:28:30.421749] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:50.654 [2024-09-30 23:28:30.421886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.654 pt4 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.654 "name": "raid_bdev1", 00:10:50.654 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:50.654 "strip_size_kb": 0, 00:10:50.654 "state": "online", 00:10:50.654 "raid_level": "raid1", 00:10:50.654 "superblock": true, 00:10:50.654 "num_base_bdevs": 4, 00:10:50.654 
"num_base_bdevs_discovered": 4, 00:10:50.654 "num_base_bdevs_operational": 4, 00:10:50.654 "base_bdevs_list": [ 00:10:50.654 { 00:10:50.654 "name": "pt1", 00:10:50.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.654 "is_configured": true, 00:10:50.654 "data_offset": 2048, 00:10:50.654 "data_size": 63488 00:10:50.654 }, 00:10:50.654 { 00:10:50.654 "name": "pt2", 00:10:50.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.654 "is_configured": true, 00:10:50.654 "data_offset": 2048, 00:10:50.654 "data_size": 63488 00:10:50.654 }, 00:10:50.654 { 00:10:50.654 "name": "pt3", 00:10:50.654 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.654 "is_configured": true, 00:10:50.654 "data_offset": 2048, 00:10:50.654 "data_size": 63488 00:10:50.654 }, 00:10:50.654 { 00:10:50.654 "name": "pt4", 00:10:50.654 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.654 "is_configured": true, 00:10:50.654 "data_offset": 2048, 00:10:50.654 "data_size": 63488 00:10:50.654 } 00:10:50.654 ] 00:10:50.654 }' 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.654 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.223 [2024-09-30 23:28:30.852336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.223 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.223 "name": "raid_bdev1", 00:10:51.223 "aliases": [ 00:10:51.224 "c31dbbee-efcc-4a4c-9661-024d89901d2f" 00:10:51.224 ], 00:10:51.224 "product_name": "Raid Volume", 00:10:51.224 "block_size": 512, 00:10:51.224 "num_blocks": 63488, 00:10:51.224 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:51.224 "assigned_rate_limits": { 00:10:51.224 "rw_ios_per_sec": 0, 00:10:51.224 "rw_mbytes_per_sec": 0, 00:10:51.224 "r_mbytes_per_sec": 0, 00:10:51.224 "w_mbytes_per_sec": 0 00:10:51.224 }, 00:10:51.224 "claimed": false, 00:10:51.224 "zoned": false, 00:10:51.224 "supported_io_types": { 00:10:51.224 "read": true, 00:10:51.224 "write": true, 00:10:51.224 "unmap": false, 00:10:51.224 "flush": false, 00:10:51.224 "reset": true, 00:10:51.224 "nvme_admin": false, 00:10:51.224 "nvme_io": false, 00:10:51.224 "nvme_io_md": false, 00:10:51.224 "write_zeroes": true, 00:10:51.224 "zcopy": false, 00:10:51.224 "get_zone_info": false, 00:10:51.224 "zone_management": false, 00:10:51.224 "zone_append": false, 00:10:51.224 "compare": false, 00:10:51.224 "compare_and_write": false, 00:10:51.224 "abort": false, 00:10:51.224 "seek_hole": false, 00:10:51.224 "seek_data": false, 00:10:51.224 "copy": false, 00:10:51.224 "nvme_iov_md": false 00:10:51.224 }, 00:10:51.224 "memory_domains": [ 00:10:51.224 { 00:10:51.224 "dma_device_id": "system", 00:10:51.224 
"dma_device_type": 1 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.224 "dma_device_type": 2 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "dma_device_id": "system", 00:10:51.224 "dma_device_type": 1 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.224 "dma_device_type": 2 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "dma_device_id": "system", 00:10:51.224 "dma_device_type": 1 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.224 "dma_device_type": 2 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "dma_device_id": "system", 00:10:51.224 "dma_device_type": 1 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.224 "dma_device_type": 2 00:10:51.224 } 00:10:51.224 ], 00:10:51.224 "driver_specific": { 00:10:51.224 "raid": { 00:10:51.224 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:51.224 "strip_size_kb": 0, 00:10:51.224 "state": "online", 00:10:51.224 "raid_level": "raid1", 00:10:51.224 "superblock": true, 00:10:51.224 "num_base_bdevs": 4, 00:10:51.224 "num_base_bdevs_discovered": 4, 00:10:51.224 "num_base_bdevs_operational": 4, 00:10:51.224 "base_bdevs_list": [ 00:10:51.224 { 00:10:51.224 "name": "pt1", 00:10:51.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.224 "is_configured": true, 00:10:51.224 "data_offset": 2048, 00:10:51.224 "data_size": 63488 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "name": "pt2", 00:10:51.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.224 "is_configured": true, 00:10:51.224 "data_offset": 2048, 00:10:51.224 "data_size": 63488 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "name": "pt3", 00:10:51.224 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.224 "is_configured": true, 00:10:51.224 "data_offset": 2048, 00:10:51.224 "data_size": 63488 00:10:51.224 }, 00:10:51.224 { 00:10:51.224 "name": "pt4", 00:10:51.224 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:51.224 "is_configured": true, 00:10:51.224 "data_offset": 2048, 00:10:51.224 "data_size": 63488 00:10:51.224 } 00:10:51.224 ] 00:10:51.224 } 00:10:51.224 } 00:10:51.224 }' 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:51.224 pt2 00:10:51.224 pt3 00:10:51.224 pt4' 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.224 23:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.224 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:51.484 [2024-09-30 23:28:31.151828] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c31dbbee-efcc-4a4c-9661-024d89901d2f '!=' c31dbbee-efcc-4a4c-9661-024d89901d2f ']' 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.484 [2024-09-30 23:28:31.199463] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:51.484 23:28:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.484 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.485 "name": "raid_bdev1", 00:10:51.485 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:51.485 "strip_size_kb": 0, 00:10:51.485 "state": "online", 
00:10:51.485 "raid_level": "raid1", 00:10:51.485 "superblock": true, 00:10:51.485 "num_base_bdevs": 4, 00:10:51.485 "num_base_bdevs_discovered": 3, 00:10:51.485 "num_base_bdevs_operational": 3, 00:10:51.485 "base_bdevs_list": [ 00:10:51.485 { 00:10:51.485 "name": null, 00:10:51.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.485 "is_configured": false, 00:10:51.485 "data_offset": 0, 00:10:51.485 "data_size": 63488 00:10:51.485 }, 00:10:51.485 { 00:10:51.485 "name": "pt2", 00:10:51.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.485 "is_configured": true, 00:10:51.485 "data_offset": 2048, 00:10:51.485 "data_size": 63488 00:10:51.485 }, 00:10:51.485 { 00:10:51.485 "name": "pt3", 00:10:51.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.485 "is_configured": true, 00:10:51.485 "data_offset": 2048, 00:10:51.485 "data_size": 63488 00:10:51.485 }, 00:10:51.485 { 00:10:51.485 "name": "pt4", 00:10:51.485 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.485 "is_configured": true, 00:10:51.485 "data_offset": 2048, 00:10:51.485 "data_size": 63488 00:10:51.485 } 00:10:51.485 ] 00:10:51.485 }' 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.485 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 [2024-09-30 23:28:31.642611] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:52.054 [2024-09-30 23:28:31.642648] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:52.054 [2024-09-30 23:28:31.642739] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:52.054 [2024-09-30 23:28:31.642816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.054 [2024-09-30 23:28:31.642830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.054 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:52.055 
23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.055 [2024-09-30 23:28:31.738436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:52.055 [2024-09-30 23:28:31.738506] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.055 [2024-09-30 23:28:31.738525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:52.055 [2024-09-30 23:28:31.738538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.055 [2024-09-30 23:28:31.740803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.055 [2024-09-30 23:28:31.740908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:52.055 [2024-09-30 23:28:31.741033] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:52.055 [2024-09-30 23:28:31.741095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.055 pt2 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.055 "name": "raid_bdev1", 00:10:52.055 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:52.055 "strip_size_kb": 0, 00:10:52.055 "state": "configuring", 00:10:52.055 "raid_level": "raid1", 00:10:52.055 "superblock": true, 00:10:52.055 "num_base_bdevs": 4, 00:10:52.055 "num_base_bdevs_discovered": 1, 00:10:52.055 "num_base_bdevs_operational": 3, 00:10:52.055 "base_bdevs_list": [ 00:10:52.055 { 00:10:52.055 "name": null, 00:10:52.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.055 "is_configured": false, 00:10:52.055 "data_offset": 2048, 00:10:52.055 "data_size": 63488 00:10:52.055 }, 00:10:52.055 { 00:10:52.055 "name": "pt2", 00:10:52.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.055 "is_configured": true, 00:10:52.055 "data_offset": 2048, 00:10:52.055 "data_size": 63488 00:10:52.055 }, 00:10:52.055 { 00:10:52.055 "name": null, 00:10:52.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.055 "is_configured": false, 00:10:52.055 "data_offset": 2048, 00:10:52.055 "data_size": 63488 00:10:52.055 }, 00:10:52.055 { 00:10:52.055 "name": null, 00:10:52.055 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.055 "is_configured": false, 00:10:52.055 "data_offset": 2048, 00:10:52.055 "data_size": 63488 00:10:52.055 } 00:10:52.055 ] 00:10:52.055 }' 
00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.055 23:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.625 [2024-09-30 23:28:32.177756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:52.625 [2024-09-30 23:28:32.177839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.625 [2024-09-30 23:28:32.177873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:52.625 [2024-09-30 23:28:32.177890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.625 [2024-09-30 23:28:32.178348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.625 [2024-09-30 23:28:32.178396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:52.625 [2024-09-30 23:28:32.178501] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:52.625 [2024-09-30 23:28:32.178542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:52.625 pt3 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.625 "name": "raid_bdev1", 00:10:52.625 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:52.625 "strip_size_kb": 0, 00:10:52.625 "state": "configuring", 00:10:52.625 "raid_level": "raid1", 00:10:52.625 "superblock": true, 00:10:52.625 "num_base_bdevs": 4, 00:10:52.625 "num_base_bdevs_discovered": 2, 00:10:52.625 "num_base_bdevs_operational": 3, 00:10:52.625 
"base_bdevs_list": [ 00:10:52.625 { 00:10:52.625 "name": null, 00:10:52.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.625 "is_configured": false, 00:10:52.625 "data_offset": 2048, 00:10:52.625 "data_size": 63488 00:10:52.625 }, 00:10:52.625 { 00:10:52.625 "name": "pt2", 00:10:52.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.625 "is_configured": true, 00:10:52.625 "data_offset": 2048, 00:10:52.625 "data_size": 63488 00:10:52.625 }, 00:10:52.625 { 00:10:52.625 "name": "pt3", 00:10:52.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.625 "is_configured": true, 00:10:52.625 "data_offset": 2048, 00:10:52.625 "data_size": 63488 00:10:52.625 }, 00:10:52.625 { 00:10:52.625 "name": null, 00:10:52.625 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.625 "is_configured": false, 00:10:52.625 "data_offset": 2048, 00:10:52.625 "data_size": 63488 00:10:52.625 } 00:10:52.625 ] 00:10:52.625 }' 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.625 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.885 [2024-09-30 23:28:32.609000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:52.885 [2024-09-30 23:28:32.609199] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.885 [2024-09-30 23:28:32.609232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:10:52.885 [2024-09-30 23:28:32.609247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.885 [2024-09-30 23:28:32.609688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.885 [2024-09-30 23:28:32.609713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:52.885 [2024-09-30 23:28:32.609803] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:52.885 [2024-09-30 23:28:32.609841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:52.885 [2024-09-30 23:28:32.609988] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:52.885 [2024-09-30 23:28:32.610006] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.885 [2024-09-30 23:28:32.610276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:52.885 [2024-09-30 23:28:32.610420] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:52.885 [2024-09-30 23:28:32.610447] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:10:52.885 [2024-09-30 23:28:32.610578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.885 pt4 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.885 "name": "raid_bdev1", 00:10:52.885 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:52.885 "strip_size_kb": 0, 00:10:52.885 "state": "online", 00:10:52.885 "raid_level": "raid1", 00:10:52.885 "superblock": true, 00:10:52.885 "num_base_bdevs": 4, 00:10:52.885 "num_base_bdevs_discovered": 3, 00:10:52.885 "num_base_bdevs_operational": 3, 00:10:52.885 "base_bdevs_list": [ 00:10:52.885 { 00:10:52.885 "name": null, 00:10:52.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.885 "is_configured": false, 00:10:52.885 
"data_offset": 2048, 00:10:52.885 "data_size": 63488 00:10:52.885 }, 00:10:52.885 { 00:10:52.885 "name": "pt2", 00:10:52.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.885 "is_configured": true, 00:10:52.885 "data_offset": 2048, 00:10:52.885 "data_size": 63488 00:10:52.885 }, 00:10:52.885 { 00:10:52.885 "name": "pt3", 00:10:52.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.885 "is_configured": true, 00:10:52.885 "data_offset": 2048, 00:10:52.885 "data_size": 63488 00:10:52.885 }, 00:10:52.885 { 00:10:52.885 "name": "pt4", 00:10:52.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.885 "is_configured": true, 00:10:52.885 "data_offset": 2048, 00:10:52.885 "data_size": 63488 00:10:52.885 } 00:10:52.885 ] 00:10:52.885 }' 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.885 23:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.454 [2024-09-30 23:28:33.060220] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.454 [2024-09-30 23:28:33.060344] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.454 [2024-09-30 23:28:33.060455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.454 [2024-09-30 23:28:33.060560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.454 [2024-09-30 23:28:33.060626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:10:53.454 23:28:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:53.454 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.455 [2024-09-30 23:28:33.132083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:53.455 [2024-09-30 23:28:33.132227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:53.455 [2024-09-30 23:28:33.132308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:10:53.455 [2024-09-30 23:28:33.132365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.455 [2024-09-30 23:28:33.134680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.455 [2024-09-30 23:28:33.134768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:53.455 [2024-09-30 23:28:33.134892] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:53.455 [2024-09-30 23:28:33.134972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:53.455 [2024-09-30 23:28:33.135160] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:53.455 [2024-09-30 23:28:33.135240] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.455 [2024-09-30 23:28:33.135275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:10:53.455 [2024-09-30 23:28:33.135328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:53.455 [2024-09-30 23:28:33.135440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:53.455 pt1 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.455 "name": "raid_bdev1", 00:10:53.455 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:53.455 "strip_size_kb": 0, 00:10:53.455 "state": "configuring", 00:10:53.455 "raid_level": "raid1", 00:10:53.455 "superblock": true, 00:10:53.455 "num_base_bdevs": 4, 00:10:53.455 "num_base_bdevs_discovered": 2, 00:10:53.455 "num_base_bdevs_operational": 3, 00:10:53.455 "base_bdevs_list": [ 00:10:53.455 { 00:10:53.455 "name": null, 00:10:53.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.455 "is_configured": false, 00:10:53.455 "data_offset": 2048, 00:10:53.455 
"data_size": 63488 00:10:53.455 }, 00:10:53.455 { 00:10:53.455 "name": "pt2", 00:10:53.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.455 "is_configured": true, 00:10:53.455 "data_offset": 2048, 00:10:53.455 "data_size": 63488 00:10:53.455 }, 00:10:53.455 { 00:10:53.455 "name": "pt3", 00:10:53.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.455 "is_configured": true, 00:10:53.455 "data_offset": 2048, 00:10:53.455 "data_size": 63488 00:10:53.455 }, 00:10:53.455 { 00:10:53.455 "name": null, 00:10:53.455 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.455 "is_configured": false, 00:10:53.455 "data_offset": 2048, 00:10:53.455 "data_size": 63488 00:10:53.455 } 00:10:53.455 ] 00:10:53.455 }' 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.455 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.714 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:53.715 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:53.715 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.715 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.974 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.974 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:53.974 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:53.974 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.974 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.974 [2024-09-30 
23:28:33.595285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:53.974 [2024-09-30 23:28:33.595441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.974 [2024-09-30 23:28:33.595485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:10:53.974 [2024-09-30 23:28:33.595523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.974 [2024-09-30 23:28:33.595991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.974 [2024-09-30 23:28:33.596024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:53.974 [2024-09-30 23:28:33.596101] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:53.974 [2024-09-30 23:28:33.596129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:53.974 [2024-09-30 23:28:33.596231] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:53.974 [2024-09-30 23:28:33.596246] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:53.974 [2024-09-30 23:28:33.596486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:53.974 [2024-09-30 23:28:33.596610] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:53.974 [2024-09-30 23:28:33.596634] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:53.975 [2024-09-30 23:28:33.596751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.975 pt4 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:53.975 23:28:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.975 "name": "raid_bdev1", 00:10:53.975 "uuid": "c31dbbee-efcc-4a4c-9661-024d89901d2f", 00:10:53.975 "strip_size_kb": 0, 00:10:53.975 "state": "online", 00:10:53.975 "raid_level": "raid1", 00:10:53.975 "superblock": true, 00:10:53.975 "num_base_bdevs": 4, 00:10:53.975 "num_base_bdevs_discovered": 3, 00:10:53.975 "num_base_bdevs_operational": 3, 00:10:53.975 "base_bdevs_list": [ 00:10:53.975 { 
00:10:53.975 "name": null, 00:10:53.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.975 "is_configured": false, 00:10:53.975 "data_offset": 2048, 00:10:53.975 "data_size": 63488 00:10:53.975 }, 00:10:53.975 { 00:10:53.975 "name": "pt2", 00:10:53.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:53.975 "is_configured": true, 00:10:53.975 "data_offset": 2048, 00:10:53.975 "data_size": 63488 00:10:53.975 }, 00:10:53.975 { 00:10:53.975 "name": "pt3", 00:10:53.975 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:53.975 "is_configured": true, 00:10:53.975 "data_offset": 2048, 00:10:53.975 "data_size": 63488 00:10:53.975 }, 00:10:53.975 { 00:10:53.975 "name": "pt4", 00:10:53.975 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:53.975 "is_configured": true, 00:10:53.975 "data_offset": 2048, 00:10:53.975 "data_size": 63488 00:10:53.975 } 00:10:53.975 ] 00:10:53.975 }' 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.975 23:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.234 
23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:54.234 [2024-09-30 23:28:34.046995] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c31dbbee-efcc-4a4c-9661-024d89901d2f '!=' c31dbbee-efcc-4a4c-9661-024d89901d2f ']' 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85319 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85319 ']' 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85319 00:10:54.234 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:54.493 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.493 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85319 00:10:54.493 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.493 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.493 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85319' 00:10:54.493 killing process with pid 85319 00:10:54.493 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85319 00:10:54.493 [2024-09-30 23:28:34.121328] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.493 [2024-09-30 23:28:34.121486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.493 23:28:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85319 00:10:54.494 [2024-09-30 23:28:34.121607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.494 [2024-09-30 23:28:34.121700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:54.494 [2024-09-30 23:28:34.165872] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.753 23:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:54.753 00:10:54.753 real 0m7.034s 00:10:54.753 user 0m11.746s 00:10:54.753 sys 0m1.506s 00:10:54.753 ************************************ 00:10:54.753 END TEST raid_superblock_test 00:10:54.753 ************************************ 00:10:54.753 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.753 23:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.753 23:28:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:54.753 23:28:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:54.753 23:28:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.753 23:28:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.753 ************************************ 00:10:54.753 START TEST raid_read_error_test 00:10:54.753 ************************************ 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:54.753 
23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:54.753 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:54.754 23:28:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aKPJcNgz5I 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85795 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85795 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85795 ']' 00:10:54.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.754 23:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.754 [2024-09-30 23:28:34.600619] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:54.754 [2024-09-30 23:28:34.600755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85795 ] 00:10:55.013 [2024-09-30 23:28:34.768158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.013 [2024-09-30 23:28:34.812834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.013 [2024-09-30 23:28:34.856406] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.013 [2024-09-30 23:28:34.856446] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.581 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.581 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:55.581 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.581 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:55.581 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.581 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.841 BaseBdev1_malloc 00:10:55.841 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.841 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:55.841 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.841 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.841 true 00:10:55.841 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:55.841 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:55.841 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 [2024-09-30 23:28:35.463551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:55.842 [2024-09-30 23:28:35.463637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.842 [2024-09-30 23:28:35.463679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:55.842 [2024-09-30 23:28:35.463690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.842 [2024-09-30 23:28:35.466017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.842 [2024-09-30 23:28:35.466061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:55.842 BaseBdev1 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 BaseBdev2_malloc 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 true 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 [2024-09-30 23:28:35.514953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:55.842 [2024-09-30 23:28:35.515019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.842 [2024-09-30 23:28:35.515057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:55.842 [2024-09-30 23:28:35.515068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.842 [2024-09-30 23:28:35.517221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.842 [2024-09-30 23:28:35.517265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:55.842 BaseBdev2 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 BaseBdev3_malloc 00:10:55.842 23:28:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 true 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 [2024-09-30 23:28:35.556006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.842 [2024-09-30 23:28:35.556148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.842 [2024-09-30 23:28:35.556174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.842 [2024-09-30 23:28:35.556186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.842 [2024-09-30 23:28:35.558252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.842 [2024-09-30 23:28:35.558295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.842 BaseBdev3 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 BaseBdev4_malloc 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 true 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 [2024-09-30 23:28:35.596859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.842 [2024-09-30 23:28:35.596931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.842 [2024-09-30 23:28:35.596972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.842 [2024-09-30 23:28:35.596983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.842 [2024-09-30 23:28:35.599039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.842 [2024-09-30 23:28:35.599162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.842 BaseBdev4 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 [2024-09-30 23:28:35.608911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.842 [2024-09-30 23:28:35.610835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.842 [2024-09-30 23:28:35.610950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.842 [2024-09-30 23:28:35.611008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.842 [2024-09-30 23:28:35.611243] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:55.842 [2024-09-30 23:28:35.611257] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.842 [2024-09-30 23:28:35.611538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:55.842 [2024-09-30 23:28:35.611704] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:55.842 [2024-09-30 23:28:35.611727] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:55.842 [2024-09-30 23:28:35.611891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:55.842 23:28:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.842 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.842 "name": "raid_bdev1", 00:10:55.842 "uuid": "57d81d1c-17b8-4601-981a-4d1ebb0e6405", 00:10:55.842 "strip_size_kb": 0, 00:10:55.842 "state": "online", 00:10:55.842 "raid_level": "raid1", 00:10:55.842 "superblock": true, 00:10:55.842 "num_base_bdevs": 4, 00:10:55.842 "num_base_bdevs_discovered": 4, 00:10:55.842 "num_base_bdevs_operational": 4, 00:10:55.842 "base_bdevs_list": [ 00:10:55.842 { 
00:10:55.842 "name": "BaseBdev1", 00:10:55.842 "uuid": "6cc54847-8236-5e3f-830c-b29eda132e5c", 00:10:55.842 "is_configured": true, 00:10:55.842 "data_offset": 2048, 00:10:55.842 "data_size": 63488 00:10:55.842 }, 00:10:55.842 { 00:10:55.842 "name": "BaseBdev2", 00:10:55.842 "uuid": "036d35a8-0f9c-52cd-af9d-4d706b12835c", 00:10:55.842 "is_configured": true, 00:10:55.842 "data_offset": 2048, 00:10:55.843 "data_size": 63488 00:10:55.843 }, 00:10:55.843 { 00:10:55.843 "name": "BaseBdev3", 00:10:55.843 "uuid": "1614b32d-d093-570c-b1a0-b24cbe353180", 00:10:55.843 "is_configured": true, 00:10:55.843 "data_offset": 2048, 00:10:55.843 "data_size": 63488 00:10:55.843 }, 00:10:55.843 { 00:10:55.843 "name": "BaseBdev4", 00:10:55.843 "uuid": "f8a15279-08f0-538c-8ca0-e6f3de6d562c", 00:10:55.843 "is_configured": true, 00:10:55.843 "data_offset": 2048, 00:10:55.843 "data_size": 63488 00:10:55.843 } 00:10:55.843 ] 00:10:55.843 }' 00:10:55.843 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.843 23:28:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.411 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:56.411 23:28:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:56.411 [2024-09-30 23:28:36.092371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.349 23:28:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.349 23:28:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.349 "name": "raid_bdev1", 00:10:57.349 "uuid": "57d81d1c-17b8-4601-981a-4d1ebb0e6405", 00:10:57.349 "strip_size_kb": 0, 00:10:57.349 "state": "online", 00:10:57.349 "raid_level": "raid1", 00:10:57.349 "superblock": true, 00:10:57.349 "num_base_bdevs": 4, 00:10:57.349 "num_base_bdevs_discovered": 4, 00:10:57.349 "num_base_bdevs_operational": 4, 00:10:57.349 "base_bdevs_list": [ 00:10:57.349 { 00:10:57.349 "name": "BaseBdev1", 00:10:57.349 "uuid": "6cc54847-8236-5e3f-830c-b29eda132e5c", 00:10:57.349 "is_configured": true, 00:10:57.349 "data_offset": 2048, 00:10:57.349 "data_size": 63488 00:10:57.349 }, 00:10:57.349 { 00:10:57.349 "name": "BaseBdev2", 00:10:57.349 "uuid": "036d35a8-0f9c-52cd-af9d-4d706b12835c", 00:10:57.349 "is_configured": true, 00:10:57.349 "data_offset": 2048, 00:10:57.349 "data_size": 63488 00:10:57.349 }, 00:10:57.349 { 00:10:57.349 "name": "BaseBdev3", 00:10:57.349 "uuid": "1614b32d-d093-570c-b1a0-b24cbe353180", 00:10:57.349 "is_configured": true, 00:10:57.349 "data_offset": 2048, 00:10:57.349 "data_size": 63488 00:10:57.349 }, 00:10:57.349 { 00:10:57.349 "name": "BaseBdev4", 00:10:57.349 "uuid": "f8a15279-08f0-538c-8ca0-e6f3de6d562c", 00:10:57.349 "is_configured": true, 00:10:57.349 "data_offset": 2048, 00:10:57.349 "data_size": 63488 00:10:57.349 } 00:10:57.349 ] 00:10:57.349 }' 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.349 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.944 [2024-09-30 23:28:37.510284] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.944 [2024-09-30 23:28:37.510330] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.944 [2024-09-30 23:28:37.512771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.944 [2024-09-30 23:28:37.512833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.944 [2024-09-30 23:28:37.513096] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.944 [2024-09-30 23:28:37.513145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.944 { 00:10:57.944 "results": [ 00:10:57.944 { 00:10:57.944 "job": "raid_bdev1", 00:10:57.944 "core_mask": "0x1", 00:10:57.944 "workload": "randrw", 00:10:57.944 "percentage": 50, 00:10:57.944 "status": "finished", 00:10:57.944 "queue_depth": 1, 00:10:57.944 "io_size": 131072, 00:10:57.944 "runtime": 1.418759, 00:10:57.944 "iops": 11128.739976275041, 00:10:57.944 "mibps": 1391.0924970343801, 00:10:57.944 "io_failed": 0, 00:10:57.944 "io_timeout": 0, 00:10:57.944 "avg_latency_us": 87.02721019912985, 00:10:57.944 "min_latency_us": 23.58777292576419, 00:10:57.944 "max_latency_us": 1473.844541484716 00:10:57.944 } 00:10:57.944 ], 00:10:57.944 "core_count": 1 00:10:57.944 } 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85795 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85795 ']' 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85795 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85795 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85795' 00:10:57.944 killing process with pid 85795 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85795 00:10:57.944 [2024-09-30 23:28:37.559704] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.944 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85795 00:10:57.944 [2024-09-30 23:28:37.595279] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aKPJcNgz5I 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:58.212 00:10:58.212 real 0m3.360s 00:10:58.212 user 0m4.176s 00:10:58.212 sys 0m0.595s 
00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.212 23:28:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.212 ************************************ 00:10:58.212 END TEST raid_read_error_test 00:10:58.212 ************************************ 00:10:58.212 23:28:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:58.212 23:28:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:58.212 23:28:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.212 23:28:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.212 ************************************ 00:10:58.212 START TEST raid_write_error_test 00:10:58.212 ************************************ 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.212 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4qsbiYNbnW 00:10:58.213 23:28:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85924 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85924 00:10:58.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85924 ']' 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.213 23:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.213 [2024-09-30 23:28:38.033688] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:10:58.213 [2024-09-30 23:28:38.033818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85924 ] 00:10:58.476 [2024-09-30 23:28:38.194811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.476 [2024-09-30 23:28:38.239286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.476 [2024-09-30 23:28:38.283305] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.476 [2024-09-30 23:28:38.283354] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.054 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.054 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:59.054 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.054 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.054 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.054 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.320 BaseBdev1_malloc 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.320 true 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.320 [2024-09-30 23:28:38.942665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:59.320 [2024-09-30 23:28:38.942826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.320 [2024-09-30 23:28:38.942901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:59.320 [2024-09-30 23:28:38.942913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.320 [2024-09-30 23:28:38.945192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.320 [2024-09-30 23:28:38.945238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:59.320 BaseBdev1 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.320 BaseBdev2_malloc 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:59.320 23:28:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.320 true 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.320 [2024-09-30 23:28:38.994275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:59.320 [2024-09-30 23:28:38.994420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.320 [2024-09-30 23:28:38.994446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:59.320 [2024-09-30 23:28:38.994456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.320 [2024-09-30 23:28:38.996479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.320 [2024-09-30 23:28:38.996537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:59.320 BaseBdev2 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.320 23:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:59.320 BaseBdev3_malloc 00:10:59.320 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.320 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.321 true 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.321 [2024-09-30 23:28:39.034976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:59.321 [2024-09-30 23:28:39.035036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.321 [2024-09-30 23:28:39.035073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:59.321 [2024-09-30 23:28:39.035089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.321 [2024-09-30 23:28:39.037122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.321 [2024-09-30 23:28:39.037246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:59.321 BaseBdev3 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.321 BaseBdev4_malloc 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.321 true 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.321 [2024-09-30 23:28:39.075690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:59.321 [2024-09-30 23:28:39.075748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.321 [2024-09-30 23:28:39.075771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:59.321 [2024-09-30 23:28:39.075782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.321 [2024-09-30 23:28:39.077823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.321 [2024-09-30 23:28:39.077881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:59.321 BaseBdev4 
00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:59.321 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.322 [2024-09-30 23:28:39.087739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:59.322 [2024-09-30 23:28:39.089673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.322 [2024-09-30 23:28:39.089766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.322 [2024-09-30 23:28:39.089824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:59.322 [2024-09-30 23:28:39.090063] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:59.322 [2024-09-30 23:28:39.090078] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:59.322 [2024-09-30 23:28:39.090327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:59.322 [2024-09-30 23:28:39.090514] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:59.322 [2024-09-30 23:28:39.090530] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:59.322 [2024-09-30 23:28:39.090680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.322 "name": "raid_bdev1", 00:10:59.322 "uuid": "3ebfc084-f0d8-405d-ada1-11aed3d187e4", 00:10:59.322 "strip_size_kb": 0, 00:10:59.322 "state": "online", 00:10:59.322 "raid_level": "raid1", 00:10:59.322 "superblock": true, 00:10:59.322 "num_base_bdevs": 4, 00:10:59.322 "num_base_bdevs_discovered": 4, 00:10:59.322 
"num_base_bdevs_operational": 4, 00:10:59.322 "base_bdevs_list": [ 00:10:59.322 { 00:10:59.322 "name": "BaseBdev1", 00:10:59.322 "uuid": "d28f0de7-b87e-5bfd-80ba-4d667e447810", 00:10:59.322 "is_configured": true, 00:10:59.322 "data_offset": 2048, 00:10:59.322 "data_size": 63488 00:10:59.322 }, 00:10:59.322 { 00:10:59.322 "name": "BaseBdev2", 00:10:59.322 "uuid": "f699c6b1-acf5-56af-8452-793ee408367c", 00:10:59.322 "is_configured": true, 00:10:59.322 "data_offset": 2048, 00:10:59.322 "data_size": 63488 00:10:59.322 }, 00:10:59.322 { 00:10:59.322 "name": "BaseBdev3", 00:10:59.322 "uuid": "66278e6b-d25b-5aeb-8cb4-9017e53ae5b0", 00:10:59.322 "is_configured": true, 00:10:59.322 "data_offset": 2048, 00:10:59.322 "data_size": 63488 00:10:59.322 }, 00:10:59.322 { 00:10:59.322 "name": "BaseBdev4", 00:10:59.322 "uuid": "66b0fa99-d643-5f3b-86ca-f073eef47e7f", 00:10:59.322 "is_configured": true, 00:10:59.322 "data_offset": 2048, 00:10:59.322 "data_size": 63488 00:10:59.322 } 00:10:59.322 ] 00:10:59.322 }' 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.322 23:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.900 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.900 23:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.900 [2024-09-30 23:28:39.639198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.836 [2024-09-30 23:28:40.553407] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:00.836 [2024-09-30 23:28:40.553482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:00.836 [2024-09-30 23:28:40.553720] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.836 "name": "raid_bdev1", 00:11:00.836 "uuid": "3ebfc084-f0d8-405d-ada1-11aed3d187e4", 00:11:00.836 "strip_size_kb": 0, 00:11:00.836 "state": "online", 00:11:00.836 "raid_level": "raid1", 00:11:00.836 "superblock": true, 00:11:00.836 "num_base_bdevs": 4, 00:11:00.836 "num_base_bdevs_discovered": 3, 00:11:00.836 "num_base_bdevs_operational": 3, 00:11:00.836 "base_bdevs_list": [ 00:11:00.836 { 00:11:00.836 "name": null, 00:11:00.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.836 "is_configured": false, 00:11:00.836 "data_offset": 0, 00:11:00.836 "data_size": 63488 00:11:00.836 }, 00:11:00.836 { 00:11:00.836 "name": "BaseBdev2", 00:11:00.836 "uuid": "f699c6b1-acf5-56af-8452-793ee408367c", 00:11:00.836 "is_configured": true, 00:11:00.836 "data_offset": 2048, 00:11:00.836 "data_size": 63488 00:11:00.836 }, 00:11:00.836 { 00:11:00.836 "name": "BaseBdev3", 00:11:00.836 "uuid": "66278e6b-d25b-5aeb-8cb4-9017e53ae5b0", 00:11:00.836 "is_configured": true, 00:11:00.836 "data_offset": 2048, 00:11:00.836 "data_size": 63488 00:11:00.836 }, 00:11:00.836 { 00:11:00.836 "name": "BaseBdev4", 00:11:00.836 "uuid": "66b0fa99-d643-5f3b-86ca-f073eef47e7f", 00:11:00.836 "is_configured": true, 00:11:00.836 "data_offset": 2048, 00:11:00.836 "data_size": 63488 00:11:00.836 } 00:11:00.836 ] 
00:11:00.836 }' 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.836 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.404 23:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:01.404 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.404 23:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.404 [2024-09-30 23:28:41.000139] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:01.404 [2024-09-30 23:28:41.000290] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.404 [2024-09-30 23:28:41.002917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.404 [2024-09-30 23:28:41.003026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.404 [2024-09-30 23:28:41.003155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.404 [2024-09-30 23:28:41.003227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:01.404 { 00:11:01.404 "results": [ 00:11:01.404 { 00:11:01.404 "job": "raid_bdev1", 00:11:01.404 "core_mask": "0x1", 00:11:01.404 "workload": "randrw", 00:11:01.404 "percentage": 50, 00:11:01.404 "status": "finished", 00:11:01.404 "queue_depth": 1, 00:11:01.405 "io_size": 131072, 00:11:01.405 "runtime": 1.361915, 00:11:01.405 "iops": 12454.521757965806, 00:11:01.405 "mibps": 1556.8152197457257, 00:11:01.405 "io_failed": 0, 00:11:01.405 "io_timeout": 0, 00:11:01.405 "avg_latency_us": 77.52876303517392, 00:11:01.405 "min_latency_us": 22.91703056768559, 00:11:01.405 "max_latency_us": 1416.6078602620087 00:11:01.405 } 00:11:01.405 ], 00:11:01.405 "core_count": 1 
00:11:01.405 } 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85924 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85924 ']' 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85924 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85924 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85924' 00:11:01.405 killing process with pid 85924 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85924 00:11:01.405 [2024-09-30 23:28:41.048911] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.405 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85924 00:11:01.405 [2024-09-30 23:28:41.083632] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4qsbiYNbnW 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:01.665 ************************************ 00:11:01.665 END TEST raid_write_error_test 00:11:01.665 ************************************ 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:01.665 00:11:01.665 real 0m3.407s 00:11:01.665 user 0m4.285s 00:11:01.665 sys 0m0.584s 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.665 23:28:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.665 23:28:41 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:01.665 23:28:41 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:01.665 23:28:41 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:01.665 23:28:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:01.665 23:28:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.665 23:28:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.665 ************************************ 00:11:01.665 START TEST raid_rebuild_test 00:11:01.665 ************************************ 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:01.665 
23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:01.665 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86051 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86051 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86051 ']' 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.666 23:28:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.666 [2024-09-30 23:28:41.503104] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:11:01.666 [2024-09-30 23:28:41.503675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86051 ] 00:11:01.666 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:01.666 Zero copy mechanism will not be used. 
00:11:01.925 [2024-09-30 23:28:41.662846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.925 [2024-09-30 23:28:41.707283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.925 [2024-09-30 23:28:41.750980] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.925 [2024-09-30 23:28:41.751113] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.494 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.494 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:02.494 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:02.494 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:02.494 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.494 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.753 BaseBdev1_malloc 00:11:02.753 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.753 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:02.753 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.753 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.753 [2024-09-30 23:28:42.362036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:02.753 [2024-09-30 23:28:42.362126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.753 [2024-09-30 23:28:42.362159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:02.753 [2024-09-30 23:28:42.362177] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.754 [2024-09-30 23:28:42.364474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.754 [2024-09-30 23:28:42.364528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:02.754 BaseBdev1 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.754 BaseBdev2_malloc 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.754 [2024-09-30 23:28:42.399986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:02.754 [2024-09-30 23:28:42.400053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.754 [2024-09-30 23:28:42.400080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:02.754 [2024-09-30 23:28:42.400092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.754 [2024-09-30 23:28:42.402299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.754 [2024-09-30 23:28:42.402342] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:02.754 BaseBdev2 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.754 spare_malloc 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.754 spare_delay 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.754 [2024-09-30 23:28:42.440699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:02.754 [2024-09-30 23:28:42.440766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.754 [2024-09-30 23:28:42.440792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:02.754 [2024-09-30 23:28:42.440803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.754 [2024-09-30 
23:28:42.442896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.754 [2024-09-30 23:28:42.443010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:02.754 spare 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.754 [2024-09-30 23:28:42.452725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.754 [2024-09-30 23:28:42.454526] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.754 [2024-09-30 23:28:42.454692] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:02.754 [2024-09-30 23:28:42.454720] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:02.754 [2024-09-30 23:28:42.454983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:02.754 [2024-09-30 23:28:42.455126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:02.754 [2024-09-30 23:28:42.455143] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:02.754 [2024-09-30 23:28:42.455276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:02.754 23:28:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.754 "name": "raid_bdev1", 00:11:02.754 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:02.754 "strip_size_kb": 0, 00:11:02.754 "state": "online", 00:11:02.754 "raid_level": "raid1", 00:11:02.754 "superblock": false, 00:11:02.754 "num_base_bdevs": 2, 00:11:02.754 "num_base_bdevs_discovered": 2, 00:11:02.754 "num_base_bdevs_operational": 2, 00:11:02.754 "base_bdevs_list": [ 00:11:02.754 { 00:11:02.754 "name": "BaseBdev1", 
00:11:02.754 "uuid": "c948aba0-da65-5abd-a4a1-b2cdb78698f4", 00:11:02.754 "is_configured": true, 00:11:02.754 "data_offset": 0, 00:11:02.754 "data_size": 65536 00:11:02.754 }, 00:11:02.754 { 00:11:02.754 "name": "BaseBdev2", 00:11:02.754 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:02.754 "is_configured": true, 00:11:02.754 "data_offset": 0, 00:11:02.754 "data_size": 65536 00:11:02.754 } 00:11:02.754 ] 00:11:02.754 }' 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.754 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.321 [2024-09-30 23:28:42.924186] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:03.321 
23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:03.321 23:28:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:03.580 [2024-09-30 23:28:43.175502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:03.580 /dev/nbd0 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:03.580 1+0 records in 00:11:03.580 1+0 records out 00:11:03.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420297 s, 9.7 MB/s 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:03.580 23:28:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:03.581 23:28:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:03.581 23:28:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:03.581 23:28:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:03.581 23:28:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:11:08.852 65536+0 records in 00:11:08.852 65536+0 records out 00:11:08.852 33554432 bytes (34 MB, 32 MiB) copied, 4.46994 s, 7.5 MB/s 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:08.852 [2024-09-30 23:28:47.927135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.852 [2024-09-30 23:28:47.936308] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.852 23:28:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.852 "name": "raid_bdev1", 00:11:08.852 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:08.852 "strip_size_kb": 0, 00:11:08.852 "state": "online", 00:11:08.852 "raid_level": "raid1", 00:11:08.852 "superblock": false, 00:11:08.852 "num_base_bdevs": 2, 00:11:08.852 "num_base_bdevs_discovered": 1, 00:11:08.852 "num_base_bdevs_operational": 1, 00:11:08.852 "base_bdevs_list": [ 00:11:08.852 { 00:11:08.852 "name": null, 00:11:08.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.852 "is_configured": false, 00:11:08.852 "data_offset": 0, 00:11:08.852 "data_size": 65536 00:11:08.852 }, 00:11:08.852 { 00:11:08.852 "name": "BaseBdev2", 00:11:08.852 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:08.852 "is_configured": true, 00:11:08.852 "data_offset": 0, 00:11:08.852 "data_size": 65536 00:11:08.852 } 00:11:08.852 ] 00:11:08.852 }' 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.852 23:28:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.852 23:28:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:08.852 23:28:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.852 23:28:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.852 [2024-09-30 23:28:48.399518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:08.852 [2024-09-30 23:28:48.406687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:11:08.852 23:28:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.852 23:28:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:08.852 [2024-09-30 23:28:48.408919] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.790 "name": "raid_bdev1", 00:11:09.790 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:09.790 "strip_size_kb": 0, 00:11:09.790 "state": "online", 00:11:09.790 "raid_level": "raid1", 00:11:09.790 "superblock": false, 00:11:09.790 "num_base_bdevs": 2, 00:11:09.790 "num_base_bdevs_discovered": 2, 00:11:09.790 "num_base_bdevs_operational": 2, 00:11:09.790 "process": { 00:11:09.790 "type": "rebuild", 00:11:09.790 "target": "spare", 00:11:09.790 "progress": { 00:11:09.790 "blocks": 20480, 00:11:09.790 "percent": 31 00:11:09.790 } 00:11:09.790 }, 00:11:09.790 "base_bdevs_list": [ 00:11:09.790 { 00:11:09.790 "name": "spare", 00:11:09.790 "uuid": "e28773de-ae2d-55e6-a9a3-b8248fa13e3d", 00:11:09.790 "is_configured": true, 00:11:09.790 "data_offset": 0, 00:11:09.790 
"data_size": 65536 00:11:09.790 }, 00:11:09.790 { 00:11:09.790 "name": "BaseBdev2", 00:11:09.790 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:09.790 "is_configured": true, 00:11:09.790 "data_offset": 0, 00:11:09.790 "data_size": 65536 00:11:09.790 } 00:11:09.790 ] 00:11:09.790 }' 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.790 [2024-09-30 23:28:49.568438] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:09.790 [2024-09-30 23:28:49.617173] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:09.790 [2024-09-30 23:28:49.617269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.790 [2024-09-30 23:28:49.617292] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:09.790 [2024-09-30 23:28:49.617301] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.790 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.050 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.050 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.050 "name": "raid_bdev1", 00:11:10.050 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:10.050 "strip_size_kb": 0, 00:11:10.050 "state": "online", 00:11:10.050 "raid_level": "raid1", 00:11:10.050 "superblock": false, 00:11:10.050 "num_base_bdevs": 2, 00:11:10.050 "num_base_bdevs_discovered": 1, 00:11:10.050 "num_base_bdevs_operational": 1, 00:11:10.050 "base_bdevs_list": [ 00:11:10.050 { 00:11:10.050 "name": null, 00:11:10.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.050 
"is_configured": false, 00:11:10.050 "data_offset": 0, 00:11:10.050 "data_size": 65536 00:11:10.050 }, 00:11:10.050 { 00:11:10.050 "name": "BaseBdev2", 00:11:10.050 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:10.050 "is_configured": true, 00:11:10.050 "data_offset": 0, 00:11:10.050 "data_size": 65536 00:11:10.050 } 00:11:10.050 ] 00:11:10.050 }' 00:11:10.050 23:28:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.050 23:28:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:10.310 "name": "raid_bdev1", 00:11:10.310 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:10.310 "strip_size_kb": 0, 00:11:10.310 "state": "online", 00:11:10.310 "raid_level": "raid1", 00:11:10.310 "superblock": false, 00:11:10.310 "num_base_bdevs": 2, 00:11:10.310 
"num_base_bdevs_discovered": 1, 00:11:10.310 "num_base_bdevs_operational": 1, 00:11:10.310 "base_bdevs_list": [ 00:11:10.310 { 00:11:10.310 "name": null, 00:11:10.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.310 "is_configured": false, 00:11:10.310 "data_offset": 0, 00:11:10.310 "data_size": 65536 00:11:10.310 }, 00:11:10.310 { 00:11:10.310 "name": "BaseBdev2", 00:11:10.310 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:10.310 "is_configured": true, 00:11:10.310 "data_offset": 0, 00:11:10.310 "data_size": 65536 00:11:10.310 } 00:11:10.310 ] 00:11:10.310 }' 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:10.310 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:10.569 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:10.569 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:10.569 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:10.569 23:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.569 23:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.569 [2024-09-30 23:28:50.207643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:10.569 [2024-09-30 23:28:50.213905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:11:10.569 23:28:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.569 23:28:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:10.569 [2024-09-30 23:28:50.216027] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.507 "name": "raid_bdev1", 00:11:11.507 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:11.507 "strip_size_kb": 0, 00:11:11.507 "state": "online", 00:11:11.507 "raid_level": "raid1", 00:11:11.507 "superblock": false, 00:11:11.507 "num_base_bdevs": 2, 00:11:11.507 "num_base_bdevs_discovered": 2, 00:11:11.507 "num_base_bdevs_operational": 2, 00:11:11.507 "process": { 00:11:11.507 "type": "rebuild", 00:11:11.507 "target": "spare", 00:11:11.507 "progress": { 00:11:11.507 "blocks": 20480, 00:11:11.507 "percent": 31 00:11:11.507 } 00:11:11.507 }, 00:11:11.507 "base_bdevs_list": [ 00:11:11.507 { 00:11:11.507 "name": "spare", 00:11:11.507 "uuid": "e28773de-ae2d-55e6-a9a3-b8248fa13e3d", 00:11:11.507 "is_configured": true, 00:11:11.507 "data_offset": 0, 00:11:11.507 "data_size": 65536 00:11:11.507 }, 00:11:11.507 { 00:11:11.507 "name": "BaseBdev2", 00:11:11.507 "uuid": 
"f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:11.507 "is_configured": true, 00:11:11.507 "data_offset": 0, 00:11:11.507 "data_size": 65536 00:11:11.507 } 00:11:11.507 ] 00:11:11.507 }' 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.507 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.809 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.809 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=292 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.810 "name": "raid_bdev1", 00:11:11.810 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:11.810 "strip_size_kb": 0, 00:11:11.810 "state": "online", 00:11:11.810 "raid_level": "raid1", 00:11:11.810 "superblock": false, 00:11:11.810 "num_base_bdevs": 2, 00:11:11.810 "num_base_bdevs_discovered": 2, 00:11:11.810 "num_base_bdevs_operational": 2, 00:11:11.810 "process": { 00:11:11.810 "type": "rebuild", 00:11:11.810 "target": "spare", 00:11:11.810 "progress": { 00:11:11.810 "blocks": 22528, 00:11:11.810 "percent": 34 00:11:11.810 } 00:11:11.810 }, 00:11:11.810 "base_bdevs_list": [ 00:11:11.810 { 00:11:11.810 "name": "spare", 00:11:11.810 "uuid": "e28773de-ae2d-55e6-a9a3-b8248fa13e3d", 00:11:11.810 "is_configured": true, 00:11:11.810 "data_offset": 0, 00:11:11.810 "data_size": 65536 00:11:11.810 }, 00:11:11.810 { 00:11:11.810 "name": "BaseBdev2", 00:11:11.810 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:11.810 "is_configured": true, 00:11:11.810 "data_offset": 0, 00:11:11.810 "data_size": 65536 00:11:11.810 } 00:11:11.810 ] 00:11:11.810 }' 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:11.810 23:28:51 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:12.748 "name": "raid_bdev1", 00:11:12.748 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:12.748 "strip_size_kb": 0, 00:11:12.748 "state": "online", 00:11:12.748 "raid_level": "raid1", 00:11:12.748 "superblock": false, 00:11:12.748 "num_base_bdevs": 2, 00:11:12.748 "num_base_bdevs_discovered": 2, 00:11:12.748 "num_base_bdevs_operational": 2, 00:11:12.748 "process": { 00:11:12.748 "type": "rebuild", 00:11:12.748 "target": "spare", 00:11:12.748 "progress": { 00:11:12.748 "blocks": 45056, 00:11:12.748 "percent": 68 00:11:12.748 } 00:11:12.748 }, 00:11:12.748 "base_bdevs_list": [ 00:11:12.748 { 00:11:12.748 "name": "spare", 00:11:12.748 "uuid": 
"e28773de-ae2d-55e6-a9a3-b8248fa13e3d", 00:11:12.748 "is_configured": true, 00:11:12.748 "data_offset": 0, 00:11:12.748 "data_size": 65536 00:11:12.748 }, 00:11:12.748 { 00:11:12.748 "name": "BaseBdev2", 00:11:12.748 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:12.748 "is_configured": true, 00:11:12.748 "data_offset": 0, 00:11:12.748 "data_size": 65536 00:11:12.748 } 00:11:12.748 ] 00:11:12.748 }' 00:11:12.748 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.008 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:13.008 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:13.008 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:13.008 23:28:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:13.944 [2024-09-30 23:28:53.437135] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:13.944 [2024-09-30 23:28:53.437234] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:13.944 [2024-09-30 23:28:53.437286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.944 23:28:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.944 "name": "raid_bdev1", 00:11:13.944 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:13.944 "strip_size_kb": 0, 00:11:13.944 "state": "online", 00:11:13.944 "raid_level": "raid1", 00:11:13.944 "superblock": false, 00:11:13.944 "num_base_bdevs": 2, 00:11:13.944 "num_base_bdevs_discovered": 2, 00:11:13.944 "num_base_bdevs_operational": 2, 00:11:13.944 "base_bdevs_list": [ 00:11:13.944 { 00:11:13.944 "name": "spare", 00:11:13.944 "uuid": "e28773de-ae2d-55e6-a9a3-b8248fa13e3d", 00:11:13.944 "is_configured": true, 00:11:13.944 "data_offset": 0, 00:11:13.944 "data_size": 65536 00:11:13.944 }, 00:11:13.944 { 00:11:13.944 "name": "BaseBdev2", 00:11:13.944 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:13.944 "is_configured": true, 00:11:13.944 "data_offset": 0, 00:11:13.944 "data_size": 65536 00:11:13.944 } 00:11:13.944 ] 00:11:13.944 }' 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:13.944 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.204 "name": "raid_bdev1", 00:11:14.204 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:14.204 "strip_size_kb": 0, 00:11:14.204 "state": "online", 00:11:14.204 "raid_level": "raid1", 00:11:14.204 "superblock": false, 00:11:14.204 "num_base_bdevs": 2, 00:11:14.204 "num_base_bdevs_discovered": 2, 00:11:14.204 "num_base_bdevs_operational": 2, 00:11:14.204 "base_bdevs_list": [ 00:11:14.204 { 00:11:14.204 "name": "spare", 00:11:14.204 "uuid": "e28773de-ae2d-55e6-a9a3-b8248fa13e3d", 00:11:14.204 "is_configured": true, 00:11:14.204 "data_offset": 0, 00:11:14.204 "data_size": 65536 00:11:14.204 }, 00:11:14.204 { 00:11:14.204 "name": "BaseBdev2", 00:11:14.204 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:14.204 "is_configured": true, 00:11:14.204 "data_offset": 0, 00:11:14.204 "data_size": 65536 
00:11:14.204 } 00:11:14.204 ] 00:11:14.204 }' 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.204 
23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.204 "name": "raid_bdev1", 00:11:14.204 "uuid": "60024486-113b-4493-8bd6-8f4f4dc8a922", 00:11:14.204 "strip_size_kb": 0, 00:11:14.204 "state": "online", 00:11:14.204 "raid_level": "raid1", 00:11:14.204 "superblock": false, 00:11:14.204 "num_base_bdevs": 2, 00:11:14.204 "num_base_bdevs_discovered": 2, 00:11:14.204 "num_base_bdevs_operational": 2, 00:11:14.204 "base_bdevs_list": [ 00:11:14.204 { 00:11:14.204 "name": "spare", 00:11:14.204 "uuid": "e28773de-ae2d-55e6-a9a3-b8248fa13e3d", 00:11:14.204 "is_configured": true, 00:11:14.204 "data_offset": 0, 00:11:14.204 "data_size": 65536 00:11:14.204 }, 00:11:14.204 { 00:11:14.204 "name": "BaseBdev2", 00:11:14.204 "uuid": "f9983d52-b3ed-575f-97ec-acf0bc54c587", 00:11:14.204 "is_configured": true, 00:11:14.204 "data_offset": 0, 00:11:14.204 "data_size": 65536 00:11:14.204 } 00:11:14.204 ] 00:11:14.204 }' 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.204 23:28:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.773 [2024-09-30 23:28:54.355260] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.773 [2024-09-30 23:28:54.355339] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.773 [2024-09-30 23:28:54.355479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.773 [2024-09-30 23:28:54.355571] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.773 [2024-09-30 23:28:54.355625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:14.773 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:14.773 /dev/nbd0 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.032 1+0 records in 00:11:15.032 1+0 records out 00:11:15.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479224 s, 8.5 MB/s 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.032 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:15.032 /dev/nbd1 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.292 1+0 records in 00:11:15.292 1+0 records out 00:11:15.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280617 s, 14.6 MB/s 00:11:15.292 23:28:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:15.292 23:28:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:15.293 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:15.293 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:15.293 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.293 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:15.552 
23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:15.552 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86051 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86051 ']' 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86051 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86051 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86051' 00:11:15.811 killing process with pid 86051 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86051 00:11:15.811 Received shutdown signal, test time was about 60.000000 seconds 00:11:15.811 00:11:15.811 Latency(us) 00:11:15.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.811 =================================================================================================================== 00:11:15.811 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:15.811 [2024-09-30 23:28:55.462088] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.811 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86051 00:11:15.811 [2024-09-30 23:28:55.518001] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.071 23:28:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:16.071 00:11:16.071 real 0m14.477s 00:11:16.071 user 0m16.043s 00:11:16.071 sys 0m3.386s 00:11:16.071 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.071 23:28:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.071 ************************************ 00:11:16.071 END TEST raid_rebuild_test 00:11:16.071 ************************************ 00:11:16.330 23:28:55 bdev_raid 
-- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:16.330 23:28:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:16.330 23:28:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.330 23:28:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.330 ************************************ 00:11:16.330 START TEST raid_rebuild_test_sb 00:11:16.330 ************************************ 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.331 23:28:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86469 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86469 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86469 ']' 00:11:16.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.331 23:28:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.331 [2024-09-30 23:28:56.049381] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:11:16.331 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:16.331 Zero copy mechanism will not be used. 00:11:16.331 [2024-09-30 23:28:56.049600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86469 ] 00:11:16.590 [2024-09-30 23:28:56.221201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.590 [2024-09-30 23:28:56.289641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.590 [2024-09-30 23:28:56.365204] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.590 [2024-09-30 23:28:56.365245] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.159 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.159 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:17.159 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:17.159 23:28:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.159 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.159 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.159 BaseBdev1_malloc 00:11:17.159 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 [2024-09-30 23:28:56.895274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:17.160 [2024-09-30 23:28:56.895345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.160 [2024-09-30 23:28:56.895373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:17.160 [2024-09-30 23:28:56.895388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.160 [2024-09-30 23:28:56.897795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.160 [2024-09-30 23:28:56.897832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.160 BaseBdev1 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 BaseBdev2_malloc 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 [2024-09-30 23:28:56.945619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:17.160 [2024-09-30 23:28:56.945670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.160 [2024-09-30 23:28:56.945693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:17.160 [2024-09-30 23:28:56.945702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.160 [2024-09-30 23:28:56.948206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.160 [2024-09-30 23:28:56.948243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.160 BaseBdev2 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 spare_malloc 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # 
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 spare_delay 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 [2024-09-30 23:28:56.992015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:17.160 [2024-09-30 23:28:56.992065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.160 [2024-09-30 23:28:56.992085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:17.160 [2024-09-30 23:28:56.992094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.160 [2024-09-30 23:28:56.994462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.160 [2024-09-30 23:28:56.994497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:17.160 spare 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.160 23:28:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 
[2024-09-30 23:28:57.004051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.160 [2024-09-30 23:28:57.006145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.160 [2024-09-30 23:28:57.006307] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:17.160 [2024-09-30 23:28:57.006320] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.160 [2024-09-30 23:28:57.006561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:17.160 [2024-09-30 23:28:57.006706] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:17.160 [2024-09-30 23:28:57.006718] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:17.160 [2024-09-30 23:28:57.006838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:17.160 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.419 "name": "raid_bdev1", 00:11:17.419 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:17.419 "strip_size_kb": 0, 00:11:17.419 "state": "online", 00:11:17.419 "raid_level": "raid1", 00:11:17.419 "superblock": true, 00:11:17.419 "num_base_bdevs": 2, 00:11:17.419 "num_base_bdevs_discovered": 2, 00:11:17.419 "num_base_bdevs_operational": 2, 00:11:17.419 "base_bdevs_list": [ 00:11:17.419 { 00:11:17.419 "name": "BaseBdev1", 00:11:17.419 "uuid": "5770f38a-67b8-5e01-9891-fd10acd040c8", 00:11:17.419 "is_configured": true, 00:11:17.419 "data_offset": 2048, 00:11:17.419 "data_size": 63488 00:11:17.419 }, 00:11:17.419 { 00:11:17.419 "name": "BaseBdev2", 00:11:17.419 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:17.419 "is_configured": true, 00:11:17.419 "data_offset": 2048, 00:11:17.419 "data_size": 63488 00:11:17.419 } 00:11:17.419 ] 00:11:17.419 }' 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.419 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.677 [2024-09-30 23:28:57.459587] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:17.677 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- 
# bdev_list=('raid_bdev1') 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:17.936 [2024-09-30 23:28:57.735002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:17.936 /dev/nbd0 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:11:17.936 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.936 1+0 records in 00:11:17.936 1+0 records out 00:11:17.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595259 s, 6.9 MB/s 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:18.195 23:28:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:22.448 63488+0 records in 00:11:22.448 63488+0 records out 00:11:22.448 32505856 bytes (33 MB, 31 MiB) copied, 4.15247 s, 7.8 MB/s 00:11:22.448 23:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:22.448 23:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:22.448 23:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:22.448 23:29:01 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:22.448 23:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:22.448 23:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.448 23:29:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:22.448 [2024-09-30 23:29:02.174091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.448 [2024-09-30 23:29:02.210124] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.448 "name": "raid_bdev1", 00:11:22.448 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:22.448 "strip_size_kb": 0, 00:11:22.448 "state": "online", 00:11:22.448 "raid_level": "raid1", 00:11:22.448 "superblock": true, 00:11:22.448 "num_base_bdevs": 2, 00:11:22.448 "num_base_bdevs_discovered": 1, 00:11:22.448 
"num_base_bdevs_operational": 1, 00:11:22.448 "base_bdevs_list": [ 00:11:22.448 { 00:11:22.448 "name": null, 00:11:22.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.448 "is_configured": false, 00:11:22.448 "data_offset": 0, 00:11:22.448 "data_size": 63488 00:11:22.448 }, 00:11:22.448 { 00:11:22.448 "name": "BaseBdev2", 00:11:22.448 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:22.448 "is_configured": true, 00:11:22.448 "data_offset": 2048, 00:11:22.448 "data_size": 63488 00:11:22.448 } 00:11:22.448 ] 00:11:22.448 }' 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.448 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.017 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:23.017 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.017 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.017 [2024-09-30 23:29:02.645417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:23.017 [2024-09-30 23:29:02.652666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:11:23.017 23:29:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.017 23:29:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:23.017 [2024-09-30 23:29:02.654874] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.957 "name": "raid_bdev1", 00:11:23.957 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:23.957 "strip_size_kb": 0, 00:11:23.957 "state": "online", 00:11:23.957 "raid_level": "raid1", 00:11:23.957 "superblock": true, 00:11:23.957 "num_base_bdevs": 2, 00:11:23.957 "num_base_bdevs_discovered": 2, 00:11:23.957 "num_base_bdevs_operational": 2, 00:11:23.957 "process": { 00:11:23.957 "type": "rebuild", 00:11:23.957 "target": "spare", 00:11:23.957 "progress": { 00:11:23.957 "blocks": 20480, 00:11:23.957 "percent": 32 00:11:23.957 } 00:11:23.957 }, 00:11:23.957 "base_bdevs_list": [ 00:11:23.957 { 00:11:23.957 "name": "spare", 00:11:23.957 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:23.957 "is_configured": true, 00:11:23.957 "data_offset": 2048, 00:11:23.957 "data_size": 63488 00:11:23.957 }, 00:11:23.957 { 00:11:23.957 "name": "BaseBdev2", 00:11:23.957 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:23.957 "is_configured": true, 00:11:23.957 "data_offset": 2048, 00:11:23.957 "data_size": 63488 00:11:23.957 } 00:11:23.957 ] 00:11:23.957 }' 00:11:23.957 23:29:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.957 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.217 [2024-09-30 23:29:03.822554] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:24.217 [2024-09-30 23:29:03.863311] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:24.217 [2024-09-30 23:29:03.863412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.217 [2024-09-30 23:29:03.863437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:24.217 [2024-09-30 23:29:03.863456] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.217 23:29:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.217 "name": "raid_bdev1", 00:11:24.217 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:24.217 "strip_size_kb": 0, 00:11:24.217 "state": "online", 00:11:24.217 "raid_level": "raid1", 00:11:24.217 "superblock": true, 00:11:24.217 "num_base_bdevs": 2, 00:11:24.217 "num_base_bdevs_discovered": 1, 00:11:24.217 "num_base_bdevs_operational": 1, 00:11:24.217 "base_bdevs_list": [ 00:11:24.217 { 00:11:24.217 "name": null, 00:11:24.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.217 "is_configured": false, 00:11:24.217 "data_offset": 0, 00:11:24.217 "data_size": 63488 00:11:24.217 }, 00:11:24.217 { 00:11:24.217 "name": "BaseBdev2", 00:11:24.217 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:24.217 
"is_configured": true, 00:11:24.217 "data_offset": 2048, 00:11:24.217 "data_size": 63488 00:11:24.217 } 00:11:24.217 ] 00:11:24.217 }' 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.217 23:29:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.476 23:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.736 "name": "raid_bdev1", 00:11:24.736 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:24.736 "strip_size_kb": 0, 00:11:24.736 "state": "online", 00:11:24.736 "raid_level": "raid1", 00:11:24.736 "superblock": true, 00:11:24.736 "num_base_bdevs": 2, 00:11:24.736 "num_base_bdevs_discovered": 1, 00:11:24.736 "num_base_bdevs_operational": 1, 00:11:24.736 "base_bdevs_list": [ 00:11:24.736 { 00:11:24.736 "name": null, 00:11:24.736 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:24.736 "is_configured": false, 00:11:24.736 "data_offset": 0, 00:11:24.736 "data_size": 63488 00:11:24.736 }, 00:11:24.736 { 00:11:24.736 "name": "BaseBdev2", 00:11:24.736 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:24.736 "is_configured": true, 00:11:24.736 "data_offset": 2048, 00:11:24.736 "data_size": 63488 00:11:24.736 } 00:11:24.736 ] 00:11:24.736 }' 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.736 [2024-09-30 23:29:04.466174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:24.736 [2024-09-30 23:29:04.473221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.736 23:29:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:24.736 [2024-09-30 23:29:04.475376] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.673 23:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.931 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.931 "name": "raid_bdev1", 00:11:25.931 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:25.931 "strip_size_kb": 0, 00:11:25.931 "state": "online", 00:11:25.931 "raid_level": "raid1", 00:11:25.931 "superblock": true, 00:11:25.931 "num_base_bdevs": 2, 00:11:25.931 "num_base_bdevs_discovered": 2, 00:11:25.931 "num_base_bdevs_operational": 2, 00:11:25.931 "process": { 00:11:25.931 "type": "rebuild", 00:11:25.931 "target": "spare", 00:11:25.931 "progress": { 00:11:25.932 "blocks": 20480, 00:11:25.932 "percent": 32 00:11:25.932 } 00:11:25.932 }, 00:11:25.932 "base_bdevs_list": [ 00:11:25.932 { 00:11:25.932 "name": "spare", 00:11:25.932 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:25.932 "is_configured": true, 00:11:25.932 "data_offset": 2048, 00:11:25.932 "data_size": 63488 00:11:25.932 }, 00:11:25.932 { 00:11:25.932 "name": "BaseBdev2", 00:11:25.932 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:25.932 "is_configured": true, 00:11:25.932 "data_offset": 2048, 
00:11:25.932 "data_size": 63488 00:11:25.932 } 00:11:25.932 ] 00:11:25.932 }' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:25.932 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=306 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.932 "name": "raid_bdev1", 00:11:25.932 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:25.932 "strip_size_kb": 0, 00:11:25.932 "state": "online", 00:11:25.932 "raid_level": "raid1", 00:11:25.932 "superblock": true, 00:11:25.932 "num_base_bdevs": 2, 00:11:25.932 "num_base_bdevs_discovered": 2, 00:11:25.932 "num_base_bdevs_operational": 2, 00:11:25.932 "process": { 00:11:25.932 "type": "rebuild", 00:11:25.932 "target": "spare", 00:11:25.932 "progress": { 00:11:25.932 "blocks": 22528, 00:11:25.932 "percent": 35 00:11:25.932 } 00:11:25.932 }, 00:11:25.932 "base_bdevs_list": [ 00:11:25.932 { 00:11:25.932 "name": "spare", 00:11:25.932 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:25.932 "is_configured": true, 00:11:25.932 "data_offset": 2048, 00:11:25.932 "data_size": 63488 00:11:25.932 }, 00:11:25.932 { 00:11:25.932 "name": "BaseBdev2", 00:11:25.932 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:25.932 "is_configured": true, 00:11:25.932 "data_offset": 2048, 00:11:25.932 "data_size": 63488 00:11:25.932 } 00:11:25.932 ] 00:11:25.932 }' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.932 23:29:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.308 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.308 "name": "raid_bdev1", 00:11:27.308 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:27.308 "strip_size_kb": 0, 00:11:27.308 "state": "online", 00:11:27.308 "raid_level": "raid1", 00:11:27.308 "superblock": true, 00:11:27.308 "num_base_bdevs": 2, 00:11:27.308 "num_base_bdevs_discovered": 2, 00:11:27.308 "num_base_bdevs_operational": 2, 00:11:27.308 "process": { 00:11:27.308 "type": "rebuild", 00:11:27.308 "target": "spare", 
00:11:27.308 "progress": { 00:11:27.308 "blocks": 45056, 00:11:27.309 "percent": 70 00:11:27.309 } 00:11:27.309 }, 00:11:27.309 "base_bdevs_list": [ 00:11:27.309 { 00:11:27.309 "name": "spare", 00:11:27.309 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:27.309 "is_configured": true, 00:11:27.309 "data_offset": 2048, 00:11:27.309 "data_size": 63488 00:11:27.309 }, 00:11:27.309 { 00:11:27.309 "name": "BaseBdev2", 00:11:27.309 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:27.309 "is_configured": true, 00:11:27.309 "data_offset": 2048, 00:11:27.309 "data_size": 63488 00:11:27.309 } 00:11:27.309 ] 00:11:27.309 }' 00:11:27.309 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.309 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:27.309 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.309 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:27.309 23:29:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:27.876 [2024-09-30 23:29:07.595597] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:27.876 [2024-09-30 23:29:07.595748] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:27.876 [2024-09-30 23:29:07.595851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.136 "name": "raid_bdev1", 00:11:28.136 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:28.136 "strip_size_kb": 0, 00:11:28.136 "state": "online", 00:11:28.136 "raid_level": "raid1", 00:11:28.136 "superblock": true, 00:11:28.136 "num_base_bdevs": 2, 00:11:28.136 "num_base_bdevs_discovered": 2, 00:11:28.136 "num_base_bdevs_operational": 2, 00:11:28.136 "base_bdevs_list": [ 00:11:28.136 { 00:11:28.136 "name": "spare", 00:11:28.136 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:28.136 "is_configured": true, 00:11:28.136 "data_offset": 2048, 00:11:28.136 "data_size": 63488 00:11:28.136 }, 00:11:28.136 { 00:11:28.136 "name": "BaseBdev2", 00:11:28.136 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:28.136 "is_configured": true, 00:11:28.136 "data_offset": 2048, 00:11:28.136 "data_size": 63488 00:11:28.136 } 00:11:28.136 ] 00:11:28.136 }' 00:11:28.136 23:29:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:28.396 
23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.396 "name": "raid_bdev1", 00:11:28.396 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:28.396 "strip_size_kb": 0, 00:11:28.396 "state": "online", 00:11:28.396 "raid_level": "raid1", 00:11:28.396 "superblock": true, 00:11:28.396 "num_base_bdevs": 2, 00:11:28.396 "num_base_bdevs_discovered": 2, 00:11:28.396 "num_base_bdevs_operational": 2, 00:11:28.396 "base_bdevs_list": [ 00:11:28.396 { 00:11:28.396 "name": "spare", 00:11:28.396 "uuid": 
"5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:28.396 "is_configured": true, 00:11:28.396 "data_offset": 2048, 00:11:28.396 "data_size": 63488 00:11:28.396 }, 00:11:28.396 { 00:11:28.396 "name": "BaseBdev2", 00:11:28.396 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:28.396 "is_configured": true, 00:11:28.396 "data_offset": 2048, 00:11:28.396 "data_size": 63488 00:11:28.396 } 00:11:28.396 ] 00:11:28.396 }' 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.396 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.397 23:29:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.397 "name": "raid_bdev1", 00:11:28.397 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:28.397 "strip_size_kb": 0, 00:11:28.397 "state": "online", 00:11:28.397 "raid_level": "raid1", 00:11:28.397 "superblock": true, 00:11:28.397 "num_base_bdevs": 2, 00:11:28.397 "num_base_bdevs_discovered": 2, 00:11:28.397 "num_base_bdevs_operational": 2, 00:11:28.397 "base_bdevs_list": [ 00:11:28.397 { 00:11:28.397 "name": "spare", 00:11:28.397 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:28.397 "is_configured": true, 00:11:28.397 "data_offset": 2048, 00:11:28.397 "data_size": 63488 00:11:28.397 }, 00:11:28.397 { 00:11:28.397 "name": "BaseBdev2", 00:11:28.397 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:28.397 "is_configured": true, 00:11:28.397 "data_offset": 2048, 00:11:28.397 "data_size": 63488 00:11:28.397 } 00:11:28.397 ] 00:11:28.397 }' 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.397 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.966 [2024-09-30 23:29:08.625192] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.966 [2024-09-30 23:29:08.625266] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.966 [2024-09-30 23:29:08.625373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.966 [2024-09-30 23:29:08.625475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.966 [2024-09-30 23:29:08.625553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.966 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:29.225 /dev/nbd0 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.226 1+0 records in 00:11:29.226 1+0 records out 00:11:29.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285413 s, 14.4 MB/s 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.226 23:29:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:29.485 /dev/nbd1 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:29.485 23:29:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.485 1+0 records in 00:11:29.485 1+0 records out 00:11:29.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410161 s, 10.0 MB/s 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:29.485 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:29.486 
23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.486 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.745 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.004 [2024-09-30 23:29:09.753412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.004 [2024-09-30 23:29:09.753474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.004 [2024-09-30 23:29:09.753495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:30.004 [2024-09-30 23:29:09.753511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.004 [2024-09-30 23:29:09.756061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.004 [2024-09-30 23:29:09.756149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.004 [2024-09-30 23:29:09.756247] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:30.004 [2024-09-30 
23:29:09.756309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:30.004 [2024-09-30 23:29:09.756446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.004 spare 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.004 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.004 [2024-09-30 23:29:09.856355] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:30.004 [2024-09-30 23:29:09.856382] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.004 [2024-09-30 23:29:09.856680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:30.004 [2024-09-30 23:29:09.856855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:30.004 [2024-09-30 23:29:09.856890] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:30.004 [2024-09-30 23:29:09.857023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.262 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.263 "name": "raid_bdev1", 00:11:30.263 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:30.263 "strip_size_kb": 0, 00:11:30.263 "state": "online", 00:11:30.263 "raid_level": "raid1", 00:11:30.263 "superblock": true, 00:11:30.263 "num_base_bdevs": 2, 00:11:30.263 "num_base_bdevs_discovered": 2, 00:11:30.263 "num_base_bdevs_operational": 2, 00:11:30.263 "base_bdevs_list": [ 00:11:30.263 { 00:11:30.263 "name": "spare", 00:11:30.263 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:30.263 "is_configured": true, 00:11:30.263 "data_offset": 2048, 00:11:30.263 "data_size": 63488 00:11:30.263 }, 00:11:30.263 { 00:11:30.263 "name": "BaseBdev2", 00:11:30.263 "uuid": 
"1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:30.263 "is_configured": true, 00:11:30.263 "data_offset": 2048, 00:11:30.263 "data_size": 63488 00:11:30.263 } 00:11:30.263 ] 00:11:30.263 }' 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.263 23:29:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.521 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.521 "name": "raid_bdev1", 00:11:30.522 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:30.522 "strip_size_kb": 0, 00:11:30.522 "state": "online", 00:11:30.522 "raid_level": "raid1", 00:11:30.522 "superblock": true, 00:11:30.522 "num_base_bdevs": 2, 00:11:30.522 "num_base_bdevs_discovered": 2, 00:11:30.522 "num_base_bdevs_operational": 2, 00:11:30.522 "base_bdevs_list": [ 00:11:30.522 { 
00:11:30.522 "name": "spare", 00:11:30.522 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:30.522 "is_configured": true, 00:11:30.522 "data_offset": 2048, 00:11:30.522 "data_size": 63488 00:11:30.522 }, 00:11:30.522 { 00:11:30.522 "name": "BaseBdev2", 00:11:30.522 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:30.522 "is_configured": true, 00:11:30.522 "data_offset": 2048, 00:11:30.522 "data_size": 63488 00:11:30.522 } 00:11:30.522 ] 00:11:30.522 }' 00:11:30.522 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.781 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.782 [2024-09-30 23:29:10.492147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.782 "name": "raid_bdev1", 00:11:30.782 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:30.782 "strip_size_kb": 0, 00:11:30.782 
"state": "online", 00:11:30.782 "raid_level": "raid1", 00:11:30.782 "superblock": true, 00:11:30.782 "num_base_bdevs": 2, 00:11:30.782 "num_base_bdevs_discovered": 1, 00:11:30.782 "num_base_bdevs_operational": 1, 00:11:30.782 "base_bdevs_list": [ 00:11:30.782 { 00:11:30.782 "name": null, 00:11:30.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.782 "is_configured": false, 00:11:30.782 "data_offset": 0, 00:11:30.782 "data_size": 63488 00:11:30.782 }, 00:11:30.782 { 00:11:30.782 "name": "BaseBdev2", 00:11:30.782 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:30.782 "is_configured": true, 00:11:30.782 "data_offset": 2048, 00:11:30.782 "data_size": 63488 00:11:30.782 } 00:11:30.782 ] 00:11:30.782 }' 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.782 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.350 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:31.350 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.350 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.350 [2024-09-30 23:29:10.987374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:31.350 [2024-09-30 23:29:10.987603] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:31.350 [2024-09-30 23:29:10.987664] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:31.350 [2024-09-30 23:29:10.987726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:31.350 [2024-09-30 23:29:10.994790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:11:31.350 23:29:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.350 23:29:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:31.350 [2024-09-30 23:29:10.997064] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:32.288 23:29:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:32.288 23:29:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:32.288 23:29:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:32.288 23:29:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:32.288 "name": "raid_bdev1", 00:11:32.288 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:32.288 "strip_size_kb": 0, 00:11:32.288 "state": "online", 00:11:32.288 "raid_level": "raid1", 
00:11:32.288 "superblock": true, 00:11:32.288 "num_base_bdevs": 2, 00:11:32.288 "num_base_bdevs_discovered": 2, 00:11:32.288 "num_base_bdevs_operational": 2, 00:11:32.288 "process": { 00:11:32.288 "type": "rebuild", 00:11:32.288 "target": "spare", 00:11:32.288 "progress": { 00:11:32.288 "blocks": 20480, 00:11:32.288 "percent": 32 00:11:32.288 } 00:11:32.288 }, 00:11:32.288 "base_bdevs_list": [ 00:11:32.288 { 00:11:32.288 "name": "spare", 00:11:32.288 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:32.288 "is_configured": true, 00:11:32.288 "data_offset": 2048, 00:11:32.288 "data_size": 63488 00:11:32.288 }, 00:11:32.288 { 00:11:32.288 "name": "BaseBdev2", 00:11:32.288 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:32.288 "is_configured": true, 00:11:32.288 "data_offset": 2048, 00:11:32.288 "data_size": 63488 00:11:32.288 } 00:11:32.288 ] 00:11:32.288 }' 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.288 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.548 [2024-09-30 23:29:12.141034] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:32.548 [2024-09-30 23:29:12.204535] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:32.548 [2024-09-30 23:29:12.204589] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:32.548 [2024-09-30 23:29:12.204609] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:32.548 [2024-09-30 23:29:12.204617] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.548 "name": "raid_bdev1", 00:11:32.548 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:32.548 "strip_size_kb": 0, 00:11:32.548 "state": "online", 00:11:32.548 "raid_level": "raid1", 00:11:32.548 "superblock": true, 00:11:32.548 "num_base_bdevs": 2, 00:11:32.548 "num_base_bdevs_discovered": 1, 00:11:32.548 "num_base_bdevs_operational": 1, 00:11:32.548 "base_bdevs_list": [ 00:11:32.548 { 00:11:32.548 "name": null, 00:11:32.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.548 "is_configured": false, 00:11:32.548 "data_offset": 0, 00:11:32.548 "data_size": 63488 00:11:32.548 }, 00:11:32.548 { 00:11:32.548 "name": "BaseBdev2", 00:11:32.548 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:32.548 "is_configured": true, 00:11:32.548 "data_offset": 2048, 00:11:32.548 "data_size": 63488 00:11:32.548 } 00:11:32.548 ] 00:11:32.548 }' 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.548 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.116 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:33.116 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.116 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.116 [2024-09-30 23:29:12.667038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:33.116 [2024-09-30 23:29:12.667179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.116 [2024-09-30 23:29:12.667222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:33.116 [2024-09-30 23:29:12.667277] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.116 [2024-09-30 23:29:12.667810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.116 [2024-09-30 23:29:12.667885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:33.116 [2024-09-30 23:29:12.668010] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:33.116 [2024-09-30 23:29:12.668050] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:33.116 [2024-09-30 23:29:12.668108] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:33.116 [2024-09-30 23:29:12.668193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:33.116 [2024-09-30 23:29:12.674881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:33.116 spare 00:11:33.117 23:29:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.117 23:29:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:33.117 [2024-09-30 23:29:12.677128] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.055 "name": "raid_bdev1", 00:11:34.055 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:34.055 "strip_size_kb": 0, 00:11:34.055 "state": "online", 00:11:34.055 "raid_level": "raid1", 00:11:34.055 "superblock": true, 00:11:34.055 "num_base_bdevs": 2, 00:11:34.055 "num_base_bdevs_discovered": 2, 00:11:34.055 "num_base_bdevs_operational": 2, 00:11:34.055 "process": { 00:11:34.055 "type": "rebuild", 00:11:34.055 "target": "spare", 00:11:34.055 "progress": { 00:11:34.055 "blocks": 20480, 00:11:34.055 "percent": 32 00:11:34.055 } 00:11:34.055 }, 00:11:34.055 "base_bdevs_list": [ 00:11:34.055 { 00:11:34.055 "name": "spare", 00:11:34.055 "uuid": "5f20a11d-6354-5465-8086-443dff0d99f0", 00:11:34.055 "is_configured": true, 00:11:34.055 "data_offset": 2048, 00:11:34.055 "data_size": 63488 00:11:34.055 }, 00:11:34.055 { 00:11:34.055 "name": "BaseBdev2", 00:11:34.055 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:34.055 "is_configured": true, 00:11:34.055 "data_offset": 2048, 00:11:34.055 "data_size": 63488 00:11:34.055 } 00:11:34.055 ] 00:11:34.055 }' 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.055 
23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.055 [2024-09-30 23:29:13.821181] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.055 [2024-09-30 23:29:13.884853] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:34.055 [2024-09-30 23:29:13.884934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.055 [2024-09-30 23:29:13.884951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:34.055 [2024-09-30 23:29:13.884960] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.055 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.056 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.324 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.324 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.324 "name": "raid_bdev1", 00:11:34.324 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:34.324 "strip_size_kb": 0, 00:11:34.324 "state": "online", 00:11:34.324 "raid_level": "raid1", 00:11:34.324 "superblock": true, 00:11:34.324 "num_base_bdevs": 2, 00:11:34.324 "num_base_bdevs_discovered": 1, 00:11:34.324 "num_base_bdevs_operational": 1, 00:11:34.324 "base_bdevs_list": [ 00:11:34.324 { 00:11:34.324 "name": null, 00:11:34.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.324 "is_configured": false, 00:11:34.324 "data_offset": 0, 00:11:34.324 "data_size": 63488 00:11:34.324 }, 00:11:34.324 { 00:11:34.324 "name": "BaseBdev2", 00:11:34.324 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:34.324 "is_configured": true, 00:11:34.324 "data_offset": 2048, 00:11:34.324 "data_size": 63488 00:11:34.324 } 00:11:34.324 ] 00:11:34.324 }' 00:11:34.324 23:29:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.324 23:29:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.601 23:29:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:34.601 "name": "raid_bdev1", 00:11:34.601 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:34.601 "strip_size_kb": 0, 00:11:34.601 "state": "online", 00:11:34.601 "raid_level": "raid1", 00:11:34.601 "superblock": true, 00:11:34.601 "num_base_bdevs": 2, 00:11:34.601 "num_base_bdevs_discovered": 1, 00:11:34.601 "num_base_bdevs_operational": 1, 00:11:34.601 "base_bdevs_list": [ 00:11:34.601 { 00:11:34.601 "name": null, 00:11:34.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.601 "is_configured": false, 00:11:34.601 "data_offset": 0, 00:11:34.601 "data_size": 63488 00:11:34.601 }, 00:11:34.601 { 00:11:34.601 "name": "BaseBdev2", 00:11:34.601 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:34.601 "is_configured": true, 00:11:34.601 "data_offset": 2048, 00:11:34.601 "data_size": 
63488 00:11:34.601 } 00:11:34.601 ] 00:11:34.601 }' 00:11:34.601 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.875 [2024-09-30 23:29:14.543248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:34.875 [2024-09-30 23:29:14.543311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.875 [2024-09-30 23:29:14.543331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:34.875 [2024-09-30 23:29:14.543344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.875 [2024-09-30 23:29:14.543797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.875 [2024-09-30 23:29:14.543819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:11:34.875 [2024-09-30 23:29:14.543914] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:34.875 [2024-09-30 23:29:14.543937] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:34.875 [2024-09-30 23:29:14.543955] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:34.875 [2024-09-30 23:29:14.543980] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:34.875 BaseBdev1 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.875 23:29:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.854 "name": "raid_bdev1", 00:11:35.854 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:35.854 "strip_size_kb": 0, 00:11:35.854 "state": "online", 00:11:35.854 "raid_level": "raid1", 00:11:35.854 "superblock": true, 00:11:35.854 "num_base_bdevs": 2, 00:11:35.854 "num_base_bdevs_discovered": 1, 00:11:35.854 "num_base_bdevs_operational": 1, 00:11:35.854 "base_bdevs_list": [ 00:11:35.854 { 00:11:35.854 "name": null, 00:11:35.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.854 "is_configured": false, 00:11:35.854 "data_offset": 0, 00:11:35.854 "data_size": 63488 00:11:35.854 }, 00:11:35.854 { 00:11:35.854 "name": "BaseBdev2", 00:11:35.854 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:35.854 "is_configured": true, 00:11:35.854 "data_offset": 2048, 00:11:35.854 "data_size": 63488 00:11:35.854 } 00:11:35.854 ] 00:11:35.854 }' 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.854 23:29:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.422 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:36.422 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.422 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.423 "name": "raid_bdev1", 00:11:36.423 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:36.423 "strip_size_kb": 0, 00:11:36.423 "state": "online", 00:11:36.423 "raid_level": "raid1", 00:11:36.423 "superblock": true, 00:11:36.423 "num_base_bdevs": 2, 00:11:36.423 "num_base_bdevs_discovered": 1, 00:11:36.423 "num_base_bdevs_operational": 1, 00:11:36.423 "base_bdevs_list": [ 00:11:36.423 { 00:11:36.423 "name": null, 00:11:36.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.423 "is_configured": false, 00:11:36.423 "data_offset": 0, 00:11:36.423 "data_size": 63488 00:11:36.423 }, 00:11:36.423 { 00:11:36.423 "name": "BaseBdev2", 00:11:36.423 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:36.423 "is_configured": true, 00:11:36.423 "data_offset": 2048, 00:11:36.423 "data_size": 63488 00:11:36.423 } 00:11:36.423 ] 00:11:36.423 }' 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:36.423 23:29:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.423 [2024-09-30 23:29:16.184397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.423 [2024-09-30 23:29:16.184586] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:36.423 [2024-09-30 23:29:16.184599] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:36.423 request: 00:11:36.423 { 00:11:36.423 "base_bdev": "BaseBdev1", 00:11:36.423 "raid_bdev": "raid_bdev1", 00:11:36.423 "method": 
"bdev_raid_add_base_bdev", 00:11:36.423 "req_id": 1 00:11:36.423 } 00:11:36.423 Got JSON-RPC error response 00:11:36.423 response: 00:11:36.423 { 00:11:36.423 "code": -22, 00:11:36.423 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:36.423 } 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:36.423 23:29:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.361 23:29:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.361 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.620 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.620 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.620 "name": "raid_bdev1", 00:11:37.620 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:37.620 "strip_size_kb": 0, 00:11:37.620 "state": "online", 00:11:37.620 "raid_level": "raid1", 00:11:37.620 "superblock": true, 00:11:37.620 "num_base_bdevs": 2, 00:11:37.620 "num_base_bdevs_discovered": 1, 00:11:37.620 "num_base_bdevs_operational": 1, 00:11:37.620 "base_bdevs_list": [ 00:11:37.620 { 00:11:37.620 "name": null, 00:11:37.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.620 "is_configured": false, 00:11:37.620 "data_offset": 0, 00:11:37.620 "data_size": 63488 00:11:37.620 }, 00:11:37.620 { 00:11:37.620 "name": "BaseBdev2", 00:11:37.620 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:37.620 "is_configured": true, 00:11:37.620 "data_offset": 2048, 00:11:37.620 "data_size": 63488 00:11:37.620 } 00:11:37.620 ] 00:11:37.620 }' 00:11:37.620 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.620 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.879 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.880 "name": "raid_bdev1", 00:11:37.880 "uuid": "ed0de12f-1afa-4032-96b3-90c171240376", 00:11:37.880 "strip_size_kb": 0, 00:11:37.880 "state": "online", 00:11:37.880 "raid_level": "raid1", 00:11:37.880 "superblock": true, 00:11:37.880 "num_base_bdevs": 2, 00:11:37.880 "num_base_bdevs_discovered": 1, 00:11:37.880 "num_base_bdevs_operational": 1, 00:11:37.880 "base_bdevs_list": [ 00:11:37.880 { 00:11:37.880 "name": null, 00:11:37.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.880 "is_configured": false, 00:11:37.880 "data_offset": 0, 00:11:37.880 "data_size": 63488 00:11:37.880 }, 00:11:37.880 { 00:11:37.880 "name": "BaseBdev2", 00:11:37.880 "uuid": "1319ea15-1150-52ac-9c79-a014f3df9b63", 00:11:37.880 "is_configured": true, 00:11:37.880 "data_offset": 2048, 00:11:37.880 "data_size": 63488 00:11:37.880 } 00:11:37.880 ] 00:11:37.880 }' 00:11:37.880 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86469 00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86469 ']' 00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86469 00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.139 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86469 00:11:38.139 killing process with pid 86469 00:11:38.139 Received shutdown signal, test time was about 60.000000 seconds 00:11:38.139 00:11:38.140 Latency(us) 00:11:38.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:38.140 =================================================================================================================== 00:11:38.140 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:38.140 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.140 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.140 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86469' 00:11:38.140 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86469 00:11:38.140 [2024-09-30 23:29:17.833747] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.140 [2024-09-30 23:29:17.833910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:11:38.140 23:29:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86469 00:11:38.140 [2024-09-30 23:29:17.833971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.140 [2024-09-30 23:29:17.833980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:11:38.140 [2024-09-30 23:29:17.890156] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:38.709 00:11:38.709 real 0m22.308s 00:11:38.709 user 0m26.995s 00:11:38.709 sys 0m4.023s 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.709 ************************************ 00:11:38.709 END TEST raid_rebuild_test_sb 00:11:38.709 ************************************ 00:11:38.709 23:29:18 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:38.709 23:29:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:38.709 23:29:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.709 23:29:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.709 ************************************ 00:11:38.709 START TEST raid_rebuild_test_io 00:11:38.709 ************************************ 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 
-- # local superblock=false 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:38.709 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:38.710 
23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87183 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87183 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87183 ']' 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.710 23:29:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.710 [2024-09-30 23:29:18.434754] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:11:38.710 [2024-09-30 23:29:18.435003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:38.710 Zero copy mechanism will not be used. 
00:11:38.710 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87183 ] 00:11:38.970 [2024-09-30 23:29:18.600893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.970 [2024-09-30 23:29:18.668385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.970 [2024-09-30 23:29:18.743803] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.970 [2024-09-30 23:29:18.743957] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.546 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.546 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:39.546 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:39.546 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 BaseBdev1_malloc 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 [2024-09-30 23:29:19.285918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:39.547 [2024-09-30 23:29:19.285984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:39.547 [2024-09-30 23:29:19.286010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:39.547 [2024-09-30 23:29:19.286027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.547 [2024-09-30 23:29:19.288473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.547 [2024-09-30 23:29:19.288511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:39.547 BaseBdev1 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 BaseBdev2_malloc 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 [2024-09-30 23:29:19.333850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:39.547 [2024-09-30 23:29:19.333956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.547 [2024-09-30 23:29:19.333996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:39.547 [2024-09-30 23:29:19.334015] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.547 [2024-09-30 23:29:19.338478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.547 [2024-09-30 23:29:19.338540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:39.547 BaseBdev2 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 spare_malloc 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 spare_delay 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 [2024-09-30 23:29:19.381618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:39.547 [2024-09-30 23:29:19.381710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:11:39.547 [2024-09-30 23:29:19.381737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:39.547 [2024-09-30 23:29:19.381745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.547 [2024-09-30 23:29:19.384123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.547 [2024-09-30 23:29:19.384159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:39.547 spare 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.547 [2024-09-30 23:29:19.393649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.547 [2024-09-30 23:29:19.395735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.547 [2024-09-30 23:29:19.395819] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:39.547 [2024-09-30 23:29:19.395838] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:39.547 [2024-09-30 23:29:19.396104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:39.547 [2024-09-30 23:29:19.396214] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:39.547 [2024-09-30 23:29:19.396226] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:39.547 [2024-09-30 23:29:19.396352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.547 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.806 "name": "raid_bdev1", 00:11:39.806 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:39.806 
"strip_size_kb": 0, 00:11:39.806 "state": "online", 00:11:39.806 "raid_level": "raid1", 00:11:39.806 "superblock": false, 00:11:39.806 "num_base_bdevs": 2, 00:11:39.806 "num_base_bdevs_discovered": 2, 00:11:39.806 "num_base_bdevs_operational": 2, 00:11:39.806 "base_bdevs_list": [ 00:11:39.806 { 00:11:39.806 "name": "BaseBdev1", 00:11:39.806 "uuid": "fca45e52-2b8a-5803-82f5-87459c8491a1", 00:11:39.806 "is_configured": true, 00:11:39.806 "data_offset": 0, 00:11:39.806 "data_size": 65536 00:11:39.806 }, 00:11:39.806 { 00:11:39.806 "name": "BaseBdev2", 00:11:39.806 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:39.806 "is_configured": true, 00:11:39.806 "data_offset": 0, 00:11:39.806 "data_size": 65536 00:11:39.806 } 00:11:39.806 ] 00:11:39.806 }' 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.806 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.065 [2024-09-30 23:29:19.861091] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.065 23:29:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.065 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.324 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.324 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:40.324 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:40.324 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.325 [2024-09-30 23:29:19.936655] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.325 23:29:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.325 "name": "raid_bdev1", 00:11:40.325 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:40.325 "strip_size_kb": 0, 00:11:40.325 "state": "online", 00:11:40.325 "raid_level": "raid1", 00:11:40.325 "superblock": false, 00:11:40.325 "num_base_bdevs": 2, 00:11:40.325 "num_base_bdevs_discovered": 1, 00:11:40.325 "num_base_bdevs_operational": 1, 00:11:40.325 "base_bdevs_list": [ 00:11:40.325 { 00:11:40.325 "name": null, 00:11:40.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.325 "is_configured": false, 00:11:40.325 "data_offset": 0, 00:11:40.325 "data_size": 65536 00:11:40.325 }, 00:11:40.325 { 00:11:40.325 "name": "BaseBdev2", 00:11:40.325 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:40.325 "is_configured": true, 00:11:40.325 "data_offset": 0, 00:11:40.325 "data_size": 65536 00:11:40.325 } 00:11:40.325 ] 00:11:40.325 }' 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.325 23:29:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:11:40.325 [2024-09-30 23:29:20.031822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:40.325 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:40.325 Zero copy mechanism will not be used. 00:11:40.325 Running I/O for 60 seconds... 00:11:40.585 23:29:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:40.585 23:29:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.585 23:29:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.585 [2024-09-30 23:29:20.403966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:40.585 23:29:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.585 23:29:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:40.845 [2024-09-30 23:29:20.446631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:40.845 [2024-09-30 23:29:20.448962] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:40.845 [2024-09-30 23:29:20.558024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:40.845 [2024-09-30 23:29:20.558508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:41.104 [2024-09-30 23:29:20.767857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:41.104 [2024-09-30 23:29:20.768385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:41.363 173.00 IOPS, 519.00 MiB/s [2024-09-30 23:29:21.109931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 
offset_begin: 6144 offset_end: 12288 00:11:41.623 [2024-09-30 23:29:21.322855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:41.623 [2024-09-30 23:29:21.323122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.623 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.883 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.883 "name": "raid_bdev1", 00:11:41.883 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:41.883 "strip_size_kb": 0, 00:11:41.883 "state": "online", 00:11:41.883 "raid_level": "raid1", 00:11:41.883 "superblock": false, 00:11:41.883 "num_base_bdevs": 2, 00:11:41.883 "num_base_bdevs_discovered": 2, 00:11:41.883 "num_base_bdevs_operational": 2, 00:11:41.883 "process": { 00:11:41.883 "type": "rebuild", 
00:11:41.883 "target": "spare", 00:11:41.883 "progress": { 00:11:41.883 "blocks": 10240, 00:11:41.883 "percent": 15 00:11:41.883 } 00:11:41.883 }, 00:11:41.883 "base_bdevs_list": [ 00:11:41.883 { 00:11:41.883 "name": "spare", 00:11:41.883 "uuid": "a7894754-41c1-5fbe-b20d-ff43b46e2547", 00:11:41.883 "is_configured": true, 00:11:41.883 "data_offset": 0, 00:11:41.883 "data_size": 65536 00:11:41.883 }, 00:11:41.883 { 00:11:41.883 "name": "BaseBdev2", 00:11:41.883 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:41.883 "is_configured": true, 00:11:41.883 "data_offset": 0, 00:11:41.883 "data_size": 65536 00:11:41.883 } 00:11:41.883 ] 00:11:41.883 }' 00:11:41.883 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.883 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.883 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.883 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.883 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:41.883 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.883 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.883 [2024-09-30 23:29:21.596508] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:41.883 [2024-09-30 23:29:21.658037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:42.142 [2024-09-30 23:29:21.758492] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:42.142 [2024-09-30 23:29:21.760550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.142 
[2024-09-30 23:29:21.760641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:42.142 [2024-09-30 23:29:21.760668] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:42.142 [2024-09-30 23:29:21.781080] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.142 "name": "raid_bdev1", 00:11:42.142 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:42.142 "strip_size_kb": 0, 00:11:42.142 "state": "online", 00:11:42.142 "raid_level": "raid1", 00:11:42.142 "superblock": false, 00:11:42.142 "num_base_bdevs": 2, 00:11:42.142 "num_base_bdevs_discovered": 1, 00:11:42.142 "num_base_bdevs_operational": 1, 00:11:42.142 "base_bdevs_list": [ 00:11:42.142 { 00:11:42.142 "name": null, 00:11:42.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.142 "is_configured": false, 00:11:42.142 "data_offset": 0, 00:11:42.142 "data_size": 65536 00:11:42.142 }, 00:11:42.142 { 00:11:42.142 "name": "BaseBdev2", 00:11:42.142 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:42.142 "is_configured": true, 00:11:42.142 "data_offset": 0, 00:11:42.142 "data_size": 65536 00:11:42.142 } 00:11:42.142 ] 00:11:42.142 }' 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.142 23:29:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.402 143.50 IOPS, 430.50 MiB/s 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.402 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.402 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.402 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.402 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.402 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:42.402 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.402 23:29:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.402 23:29:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.662 "name": "raid_bdev1", 00:11:42.662 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:42.662 "strip_size_kb": 0, 00:11:42.662 "state": "online", 00:11:42.662 "raid_level": "raid1", 00:11:42.662 "superblock": false, 00:11:42.662 "num_base_bdevs": 2, 00:11:42.662 "num_base_bdevs_discovered": 1, 00:11:42.662 "num_base_bdevs_operational": 1, 00:11:42.662 "base_bdevs_list": [ 00:11:42.662 { 00:11:42.662 "name": null, 00:11:42.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.662 "is_configured": false, 00:11:42.662 "data_offset": 0, 00:11:42.662 "data_size": 65536 00:11:42.662 }, 00:11:42.662 { 00:11:42.662 "name": "BaseBdev2", 00:11:42.662 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:42.662 "is_configured": true, 00:11:42.662 "data_offset": 0, 00:11:42.662 "data_size": 65536 00:11:42.662 } 00:11:42.662 ] 00:11:42.662 }' 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:42.662 23:29:22 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:42.662 [2024-09-30 23:29:22.375019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.662 23:29:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:42.662 [2024-09-30 23:29:22.422889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:42.662 [2024-09-30 23:29:22.425098] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:42.921 [2024-09-30 23:29:22.542402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:42.921 [2024-09-30 23:29:22.543062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:42.921 [2024-09-30 23:29:22.756914] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:42.921 [2024-09-30 23:29:22.757341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:43.489 144.33 IOPS, 433.00 MiB/s [2024-09-30 23:29:23.231132] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:43.748 "name": "raid_bdev1", 00:11:43.748 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:43.748 "strip_size_kb": 0, 00:11:43.748 "state": "online", 00:11:43.748 "raid_level": "raid1", 00:11:43.748 "superblock": false, 00:11:43.748 "num_base_bdevs": 2, 00:11:43.748 "num_base_bdevs_discovered": 2, 00:11:43.748 "num_base_bdevs_operational": 2, 00:11:43.748 "process": { 00:11:43.748 "type": "rebuild", 00:11:43.748 "target": "spare", 00:11:43.748 "progress": { 00:11:43.748 "blocks": 10240, 00:11:43.748 "percent": 15 00:11:43.748 } 00:11:43.748 }, 00:11:43.748 "base_bdevs_list": [ 00:11:43.748 { 00:11:43.748 "name": "spare", 00:11:43.748 "uuid": "a7894754-41c1-5fbe-b20d-ff43b46e2547", 00:11:43.748 "is_configured": true, 00:11:43.748 "data_offset": 0, 00:11:43.748 "data_size": 65536 00:11:43.748 }, 00:11:43.748 { 00:11:43.748 "name": "BaseBdev2", 00:11:43.748 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:43.748 "is_configured": true, 00:11:43.748 "data_offset": 0, 00:11:43.748 "data_size": 65536 00:11:43.748 } 00:11:43.748 ] 00:11:43.748 }' 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:43.748 
23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=324 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:43.748 [2024-09-30 
23:29:23.579334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:43.748 23:29:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.008 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:44.008 "name": "raid_bdev1", 00:11:44.008 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:44.008 "strip_size_kb": 0, 00:11:44.008 "state": "online", 00:11:44.008 "raid_level": "raid1", 00:11:44.008 "superblock": false, 00:11:44.008 "num_base_bdevs": 2, 00:11:44.008 "num_base_bdevs_discovered": 2, 00:11:44.008 "num_base_bdevs_operational": 2, 00:11:44.008 "process": { 00:11:44.008 "type": "rebuild", 00:11:44.008 "target": "spare", 00:11:44.008 "progress": { 00:11:44.008 "blocks": 12288, 00:11:44.008 "percent": 18 00:11:44.008 } 00:11:44.008 }, 00:11:44.008 "base_bdevs_list": [ 00:11:44.008 { 00:11:44.008 "name": "spare", 00:11:44.008 "uuid": "a7894754-41c1-5fbe-b20d-ff43b46e2547", 00:11:44.008 "is_configured": true, 00:11:44.008 "data_offset": 0, 00:11:44.008 "data_size": 65536 00:11:44.008 }, 00:11:44.008 { 00:11:44.008 "name": "BaseBdev2", 00:11:44.008 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:44.008 "is_configured": true, 00:11:44.008 "data_offset": 0, 00:11:44.008 "data_size": 65536 00:11:44.008 } 00:11:44.008 ] 00:11:44.008 }' 00:11:44.008 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:44.008 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:44.008 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:44.008 [2024-09-30 23:29:23.686356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:44.008 [2024-09-30 23:29:23.686734] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:44.008 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:44.008 23:29:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:44.267 [2024-09-30 23:29:24.024338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:44.526 128.50 IOPS, 385.50 MiB/s [2024-09-30 23:29:24.251229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:45.094 
"name": "raid_bdev1", 00:11:45.094 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:45.094 "strip_size_kb": 0, 00:11:45.094 "state": "online", 00:11:45.094 "raid_level": "raid1", 00:11:45.094 "superblock": false, 00:11:45.094 "num_base_bdevs": 2, 00:11:45.094 "num_base_bdevs_discovered": 2, 00:11:45.094 "num_base_bdevs_operational": 2, 00:11:45.094 "process": { 00:11:45.094 "type": "rebuild", 00:11:45.094 "target": "spare", 00:11:45.094 "progress": { 00:11:45.094 "blocks": 30720, 00:11:45.094 "percent": 46 00:11:45.094 } 00:11:45.094 }, 00:11:45.094 "base_bdevs_list": [ 00:11:45.094 { 00:11:45.094 "name": "spare", 00:11:45.094 "uuid": "a7894754-41c1-5fbe-b20d-ff43b46e2547", 00:11:45.094 "is_configured": true, 00:11:45.094 "data_offset": 0, 00:11:45.094 "data_size": 65536 00:11:45.094 }, 00:11:45.094 { 00:11:45.094 "name": "BaseBdev2", 00:11:45.094 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:45.094 "is_configured": true, 00:11:45.094 "data_offset": 0, 00:11:45.094 "data_size": 65536 00:11:45.094 } 00:11:45.094 ] 00:11:45.094 }' 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:45.094 23:29:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:45.094 [2024-09-30 23:29:24.926424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:45.612 112.00 IOPS, 336.00 MiB/s [2024-09-30 23:29:25.264310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:45.871 [2024-09-30 
23:29:25.590507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:45.871 [2024-09-30 23:29:25.701503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:46.130 "name": "raid_bdev1", 00:11:46.130 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:46.130 "strip_size_kb": 0, 00:11:46.130 "state": "online", 00:11:46.130 "raid_level": "raid1", 00:11:46.130 "superblock": false, 00:11:46.130 "num_base_bdevs": 2, 00:11:46.130 "num_base_bdevs_discovered": 2, 00:11:46.130 "num_base_bdevs_operational": 2, 00:11:46.130 
"process": { 00:11:46.130 "type": "rebuild", 00:11:46.130 "target": "spare", 00:11:46.130 "progress": { 00:11:46.130 "blocks": 49152, 00:11:46.130 "percent": 75 00:11:46.130 } 00:11:46.130 }, 00:11:46.130 "base_bdevs_list": [ 00:11:46.130 { 00:11:46.130 "name": "spare", 00:11:46.130 "uuid": "a7894754-41c1-5fbe-b20d-ff43b46e2547", 00:11:46.130 "is_configured": true, 00:11:46.130 "data_offset": 0, 00:11:46.130 "data_size": 65536 00:11:46.130 }, 00:11:46.130 { 00:11:46.130 "name": "BaseBdev2", 00:11:46.130 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:46.130 "is_configured": true, 00:11:46.130 "data_offset": 0, 00:11:46.130 "data_size": 65536 00:11:46.130 } 00:11:46.130 ] 00:11:46.130 }' 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:46.130 [2024-09-30 23:29:25.928027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:46.130 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:46.389 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:46.389 23:29:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:46.648 99.17 IOPS, 297.50 MiB/s [2024-09-30 23:29:26.365929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:46.648 [2024-09-30 23:29:26.476057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:47.216 [2024-09-30 23:29:26.813887] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:47.216 [2024-09-30 23:29:26.913682] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: 
Finished rebuild on raid bdev raid_bdev1 00:11:47.216 [2024-09-30 23:29:26.915146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.216 23:29:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.216 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.217 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.217 "name": "raid_bdev1", 00:11:47.217 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:47.217 "strip_size_kb": 0, 00:11:47.217 "state": "online", 00:11:47.217 "raid_level": "raid1", 00:11:47.217 "superblock": false, 00:11:47.217 "num_base_bdevs": 2, 00:11:47.217 "num_base_bdevs_discovered": 2, 00:11:47.217 "num_base_bdevs_operational": 2, 00:11:47.217 "base_bdevs_list": [ 00:11:47.217 { 00:11:47.217 "name": "spare", 00:11:47.217 "uuid": "a7894754-41c1-5fbe-b20d-ff43b46e2547", 00:11:47.217 "is_configured": 
true, 00:11:47.217 "data_offset": 0, 00:11:47.217 "data_size": 65536 00:11:47.217 }, 00:11:47.217 { 00:11:47.217 "name": "BaseBdev2", 00:11:47.217 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:47.217 "is_configured": true, 00:11:47.217 "data_offset": 0, 00:11:47.217 "data_size": 65536 00:11:47.217 } 00:11:47.217 ] 00:11:47.217 }' 00:11:47.217 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.476 90.71 IOPS, 272.14 MiB/s 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.476 "name": "raid_bdev1", 00:11:47.476 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:47.476 "strip_size_kb": 0, 00:11:47.476 "state": "online", 00:11:47.476 "raid_level": "raid1", 00:11:47.476 "superblock": false, 00:11:47.476 "num_base_bdevs": 2, 00:11:47.476 "num_base_bdevs_discovered": 2, 00:11:47.476 "num_base_bdevs_operational": 2, 00:11:47.476 "base_bdevs_list": [ 00:11:47.476 { 00:11:47.476 "name": "spare", 00:11:47.476 "uuid": "a7894754-41c1-5fbe-b20d-ff43b46e2547", 00:11:47.476 "is_configured": true, 00:11:47.476 "data_offset": 0, 00:11:47.476 "data_size": 65536 00:11:47.476 }, 00:11:47.476 { 00:11:47.476 "name": "BaseBdev2", 00:11:47.476 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:47.476 "is_configured": true, 00:11:47.476 "data_offset": 0, 00:11:47.476 "data_size": 65536 00:11:47.476 } 00:11:47.476 ] 00:11:47.476 }' 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.476 "name": "raid_bdev1", 00:11:47.476 "uuid": "80cef133-7c08-4e5a-b3b3-1cc8b408dece", 00:11:47.476 "strip_size_kb": 0, 00:11:47.476 "state": "online", 00:11:47.476 "raid_level": "raid1", 00:11:47.476 "superblock": false, 00:11:47.476 "num_base_bdevs": 2, 00:11:47.476 "num_base_bdevs_discovered": 2, 00:11:47.476 "num_base_bdevs_operational": 2, 00:11:47.476 "base_bdevs_list": [ 00:11:47.476 { 00:11:47.476 "name": "spare", 00:11:47.476 "uuid": "a7894754-41c1-5fbe-b20d-ff43b46e2547", 00:11:47.476 "is_configured": true, 00:11:47.476 "data_offset": 0, 00:11:47.476 "data_size": 65536 00:11:47.476 }, 00:11:47.476 { 00:11:47.476 "name": "BaseBdev2", 00:11:47.476 "uuid": "a4847ea9-2ab4-512e-87e0-b4683f80d371", 00:11:47.476 "is_configured": true, 00:11:47.476 "data_offset": 0, 00:11:47.476 "data_size": 
65536 00:11:47.476 } 00:11:47.476 ] 00:11:47.476 }' 00:11:47.476 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.477 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.045 [2024-09-30 23:29:27.740401] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.045 [2024-09-30 23:29:27.740494] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.045 00:11:48.045 Latency(us) 00:11:48.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.045 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:48.045 raid_bdev1 : 7.77 86.07 258.20 0.00 0.00 15580.68 273.66 114473.36 00:11:48.045 =================================================================================================================== 00:11:48.045 Total : 86.07 258.20 0.00 0.00 15580.68 273.66 114473.36 00:11:48.045 [2024-09-30 23:29:27.796227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.045 [2024-09-30 23:29:27.796308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.045 [2024-09-30 23:29:27.796431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.045 [2024-09-30 23:29:27.796480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:48.045 { 00:11:48.045 "results": [ 00:11:48.045 { 00:11:48.045 "job": "raid_bdev1", 00:11:48.045 "core_mask": "0x1", 00:11:48.045 
"workload": "randrw", 00:11:48.045 "percentage": 50, 00:11:48.045 "status": "finished", 00:11:48.045 "queue_depth": 2, 00:11:48.045 "io_size": 3145728, 00:11:48.045 "runtime": 7.773101, 00:11:48.045 "iops": 86.06603722246759, 00:11:48.045 "mibps": 258.19811166740277, 00:11:48.045 "io_failed": 0, 00:11:48.045 "io_timeout": 0, 00:11:48.045 "avg_latency_us": 15580.682590844704, 00:11:48.045 "min_latency_us": 273.6628820960699, 00:11:48.045 "max_latency_us": 114473.36244541485 00:11:48.045 } 00:11:48.045 ], 00:11:48.045 "core_count": 1 00:11:48.045 } 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:48.045 23:29:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:48.304 /dev/nbd0 00:11:48.304 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:48.304 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:48.304 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:48.304 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:48.304 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:48.304 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:48.304 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:48.304 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.305 1+0 records in 00:11:48.305 1+0 records out 00:11:48.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275804 s, 14.9 MB/s 
00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:48.305 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:48.305 23:29:28 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:48.564 /dev/nbd1 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.564 1+0 records in 00:11:48.564 1+0 records out 00:11:48.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350761 s, 11.7 MB/s 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:48.564 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:48.830 
23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.830 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87183 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' 
-z 87183 ']' 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87183 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87183 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87183' 00:11:49.106 killing process with pid 87183 00:11:49.106 Received shutdown signal, test time was about 8.923435 seconds 00:11:49.106 00:11:49.106 Latency(us) 00:11:49.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.106 =================================================================================================================== 00:11:49.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87183 00:11:49.106 [2024-09-30 23:29:28.940360] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.106 23:29:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87183 00:11:49.379 [2024-09-30 23:29:28.988126] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.637 23:29:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:49.637 00:11:49.637 real 0m11.026s 00:11:49.637 user 0m14.080s 00:11:49.637 sys 0m1.575s 00:11:49.637 23:29:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.637 23:29:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.637 ************************************ 00:11:49.637 END TEST raid_rebuild_test_io 00:11:49.637 ************************************ 00:11:49.637 23:29:29 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:49.637 23:29:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:49.637 23:29:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.637 23:29:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.637 ************************************ 00:11:49.637 START TEST raid_rebuild_test_sb_io 00:11:49.637 ************************************ 00:11:49.637 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87548 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87548 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87548 ']' 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.638 23:29:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:49.897 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:49.897 Zero copy mechanism will not be used. 00:11:49.897 [2024-09-30 23:29:29.530812] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:11:49.897 [2024-09-30 23:29:29.530948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87548 ] 00:11:49.897 [2024-09-30 23:29:29.690091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.156 [2024-09-30 23:29:29.759863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.156 [2024-09-30 23:29:29.835523] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.156 [2024-09-30 23:29:29.835643] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.725 BaseBdev1_malloc 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.725 [2024-09-30 23:29:30.377540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:50.725 [2024-09-30 23:29:30.377605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.725 [2024-09-30 23:29:30.377628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:50.725 [2024-09-30 23:29:30.377650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.725 [2024-09-30 23:29:30.380092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.725 [2024-09-30 23:29:30.380126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.725 BaseBdev1 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.725 BaseBdev2_malloc 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.725 [2024-09-30 23:29:30.429052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:50.725 [2024-09-30 23:29:30.429293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.725 [2024-09-30 23:29:30.429359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:50.725 [2024-09-30 23:29:30.429388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.725 [2024-09-30 23:29:30.433913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.725 [2024-09-30 23:29:30.433972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:50.725 BaseBdev2 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:50.725 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 spare_malloc 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 spare_delay 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 [2024-09-30 23:29:30.477111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:50.726 [2024-09-30 23:29:30.477164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.726 [2024-09-30 23:29:30.477186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:50.726 [2024-09-30 23:29:30.477195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.726 [2024-09-30 23:29:30.479581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.726 [2024-09-30 23:29:30.479617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:50.726 spare 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 [2024-09-30 23:29:30.489128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.726 [2024-09-30 23:29:30.491142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.726 [2024-09-30 23:29:30.491323] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:50.726 [2024-09-30 23:29:30.491336] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:50.726 [2024-09-30 23:29:30.491579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:50.726 [2024-09-30 23:29:30.491711] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:50.726 [2024-09-30 23:29:30.491724] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:50.726 [2024-09-30 23:29:30.491846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.726 
23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.726 "name": "raid_bdev1", 00:11:50.726 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:50.726 "strip_size_kb": 0, 00:11:50.726 "state": "online", 00:11:50.726 "raid_level": "raid1", 00:11:50.726 "superblock": true, 00:11:50.726 "num_base_bdevs": 2, 00:11:50.726 "num_base_bdevs_discovered": 2, 00:11:50.726 "num_base_bdevs_operational": 2, 00:11:50.726 "base_bdevs_list": [ 00:11:50.726 { 00:11:50.726 "name": "BaseBdev1", 00:11:50.726 "uuid": "a07f35f6-b91b-5dbe-8a6f-d8b750f9efab", 00:11:50.726 "is_configured": true, 00:11:50.726 "data_offset": 2048, 00:11:50.726 "data_size": 63488 00:11:50.726 }, 00:11:50.726 { 00:11:50.726 "name": "BaseBdev2", 00:11:50.726 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:50.726 "is_configured": true, 00:11:50.726 "data_offset": 2048, 00:11:50.726 "data_size": 63488 00:11:50.726 } 00:11:50.726 ] 00:11:50.726 }' 00:11:50.726 23:29:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.726 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.295 [2024-09-30 23:29:30.932707] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:51.295 23:29:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.295 [2024-09-30 23:29:31.008242] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.295 "name": "raid_bdev1", 00:11:51.295 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:51.295 "strip_size_kb": 0, 00:11:51.295 "state": "online", 00:11:51.295 "raid_level": "raid1", 00:11:51.295 "superblock": true, 00:11:51.295 "num_base_bdevs": 2, 00:11:51.295 "num_base_bdevs_discovered": 1, 00:11:51.295 "num_base_bdevs_operational": 1, 00:11:51.295 "base_bdevs_list": [ 00:11:51.295 { 00:11:51.295 "name": null, 00:11:51.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.295 "is_configured": false, 00:11:51.295 "data_offset": 0, 00:11:51.295 "data_size": 63488 00:11:51.295 }, 00:11:51.295 { 00:11:51.295 "name": "BaseBdev2", 00:11:51.295 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:51.295 "is_configured": true, 00:11:51.295 "data_offset": 2048, 00:11:51.295 "data_size": 63488 00:11:51.295 } 00:11:51.295 ] 00:11:51.295 }' 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.295 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.295 [2024-09-30 23:29:31.099444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:51.295 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:51.295 Zero copy mechanism will not be used. 00:11:51.295 Running I/O for 60 seconds... 
00:11:51.865 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:51.865 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.865 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.865 [2024-09-30 23:29:31.420291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:51.865 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.865 23:29:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:51.865 [2024-09-30 23:29:31.466283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:51.865 [2024-09-30 23:29:31.468536] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:51.865 [2024-09-30 23:29:31.581278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:51.865 [2024-09-30 23:29:31.581978] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:52.125 [2024-09-30 23:29:31.784433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:52.125 [2024-09-30 23:29:31.784846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:52.384 [2024-09-30 23:29:32.019936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:52.384 [2024-09-30 23:29:32.020347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:52.384 150.00 IOPS, 450.00 MiB/s [2024-09-30 23:29:32.126755] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:52.384 [2024-09-30 23:29:32.126901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:52.643 [2024-09-30 23:29:32.447951] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:52.643 [2024-09-30 23:29:32.448448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.643 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.903 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.903 "name": "raid_bdev1", 00:11:52.903 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:52.903 "strip_size_kb": 0, 00:11:52.903 "state": 
"online", 00:11:52.903 "raid_level": "raid1", 00:11:52.903 "superblock": true, 00:11:52.903 "num_base_bdevs": 2, 00:11:52.903 "num_base_bdevs_discovered": 2, 00:11:52.903 "num_base_bdevs_operational": 2, 00:11:52.903 "process": { 00:11:52.903 "type": "rebuild", 00:11:52.903 "target": "spare", 00:11:52.903 "progress": { 00:11:52.903 "blocks": 14336, 00:11:52.903 "percent": 22 00:11:52.903 } 00:11:52.903 }, 00:11:52.903 "base_bdevs_list": [ 00:11:52.903 { 00:11:52.903 "name": "spare", 00:11:52.903 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:11:52.903 "is_configured": true, 00:11:52.903 "data_offset": 2048, 00:11:52.903 "data_size": 63488 00:11:52.903 }, 00:11:52.903 { 00:11:52.903 "name": "BaseBdev2", 00:11:52.903 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:52.903 "is_configured": true, 00:11:52.903 "data_offset": 2048, 00:11:52.903 "data_size": 63488 00:11:52.903 } 00:11:52.903 ] 00:11:52.903 }' 00:11:52.903 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.903 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.903 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.903 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.903 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:52.903 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.903 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.903 [2024-09-30 23:29:32.586735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:52.903 [2024-09-30 23:29:32.655373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:11:52.903 [2024-09-30 23:29:32.655589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:53.162 [2024-09-30 23:29:32.757045] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:53.162 [2024-09-30 23:29:32.771390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.162 [2024-09-30 23:29:32.771435] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.162 [2024-09-30 23:29:32.771449] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:53.163 [2024-09-30 23:29:32.786790] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.163 23:29:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.163 "name": "raid_bdev1", 00:11:53.163 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:53.163 "strip_size_kb": 0, 00:11:53.163 "state": "online", 00:11:53.163 "raid_level": "raid1", 00:11:53.163 "superblock": true, 00:11:53.163 "num_base_bdevs": 2, 00:11:53.163 "num_base_bdevs_discovered": 1, 00:11:53.163 "num_base_bdevs_operational": 1, 00:11:53.163 "base_bdevs_list": [ 00:11:53.163 { 00:11:53.163 "name": null, 00:11:53.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.163 "is_configured": false, 00:11:53.163 "data_offset": 0, 00:11:53.163 "data_size": 63488 00:11:53.163 }, 00:11:53.163 { 00:11:53.163 "name": "BaseBdev2", 00:11:53.163 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:53.163 "is_configured": true, 00:11:53.163 "data_offset": 2048, 00:11:53.163 "data_size": 63488 00:11:53.163 } 00:11:53.163 ] 00:11:53.163 }' 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.163 23:29:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.422 131.50 IOPS, 394.50 MiB/s 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:53.422 23:29:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.422 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:53.422 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:53.422 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.422 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.422 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.422 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.422 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.680 "name": "raid_bdev1", 00:11:53.680 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:53.680 "strip_size_kb": 0, 00:11:53.680 "state": "online", 00:11:53.680 "raid_level": "raid1", 00:11:53.680 "superblock": true, 00:11:53.680 "num_base_bdevs": 2, 00:11:53.680 "num_base_bdevs_discovered": 1, 00:11:53.680 "num_base_bdevs_operational": 1, 00:11:53.680 "base_bdevs_list": [ 00:11:53.680 { 00:11:53.680 "name": null, 00:11:53.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.680 "is_configured": false, 00:11:53.680 "data_offset": 0, 00:11:53.680 "data_size": 63488 00:11:53.680 }, 00:11:53.680 { 00:11:53.680 "name": "BaseBdev2", 00:11:53.680 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:53.680 "is_configured": true, 00:11:53.680 "data_offset": 2048, 00:11:53.680 "data_size": 63488 00:11:53.680 } 00:11:53.680 ] 00:11:53.680 }' 00:11:53.680 23:29:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.680 [2024-09-30 23:29:33.381762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.680 23:29:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:53.680 [2024-09-30 23:29:33.438010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:53.680 [2024-09-30 23:29:33.440228] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:53.938 [2024-09-30 23:29:33.558081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:53.938 [2024-09-30 23:29:33.558778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:53.938 [2024-09-30 23:29:33.779394] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:53.938 [2024-09-30 23:29:33.779906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:11:54.503 141.00 IOPS, 423.00 MiB/s [2024-09-30 23:29:34.117823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:54.503 [2024-09-30 23:29:34.118528] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:54.503 [2024-09-30 23:29:34.231402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:54.503 [2024-09-30 23:29:34.231806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.762 "name": "raid_bdev1", 00:11:54.762 "uuid": 
"71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:54.762 "strip_size_kb": 0, 00:11:54.762 "state": "online", 00:11:54.762 "raid_level": "raid1", 00:11:54.762 "superblock": true, 00:11:54.762 "num_base_bdevs": 2, 00:11:54.762 "num_base_bdevs_discovered": 2, 00:11:54.762 "num_base_bdevs_operational": 2, 00:11:54.762 "process": { 00:11:54.762 "type": "rebuild", 00:11:54.762 "target": "spare", 00:11:54.762 "progress": { 00:11:54.762 "blocks": 10240, 00:11:54.762 "percent": 16 00:11:54.762 } 00:11:54.762 }, 00:11:54.762 "base_bdevs_list": [ 00:11:54.762 { 00:11:54.762 "name": "spare", 00:11:54.762 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:11:54.762 "is_configured": true, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 }, 00:11:54.762 { 00:11:54.762 "name": "BaseBdev2", 00:11:54.762 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:54.762 "is_configured": true, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 } 00:11:54.762 ] 00:11:54.762 }' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:54.762 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=335 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.762 [2024-09-30 23:29:34.568197] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.762 "name": "raid_bdev1", 00:11:54.762 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:54.762 "strip_size_kb": 0, 00:11:54.762 "state": "online", 00:11:54.762 "raid_level": "raid1", 00:11:54.762 "superblock": 
true, 00:11:54.762 "num_base_bdevs": 2, 00:11:54.762 "num_base_bdevs_discovered": 2, 00:11:54.762 "num_base_bdevs_operational": 2, 00:11:54.762 "process": { 00:11:54.762 "type": "rebuild", 00:11:54.762 "target": "spare", 00:11:54.762 "progress": { 00:11:54.762 "blocks": 14336, 00:11:54.762 "percent": 22 00:11:54.762 } 00:11:54.762 }, 00:11:54.762 "base_bdevs_list": [ 00:11:54.762 { 00:11:54.762 "name": "spare", 00:11:54.762 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:11:54.762 "is_configured": true, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 }, 00:11:54.762 { 00:11:54.762 "name": "BaseBdev2", 00:11:54.762 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:54.762 "is_configured": true, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 } 00:11:54.762 ] 00:11:54.762 }' 00:11:54.762 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.021 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.021 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.021 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.021 23:29:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:55.021 [2024-09-30 23:29:34.791280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:55.280 [2024-09-30 23:29:35.031167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:55.539 121.50 IOPS, 364.50 MiB/s [2024-09-30 23:29:35.138396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:55.799 [2024-09-30 23:29:35.469827] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:55.799 [2024-09-30 23:29:35.470611] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.058 [2024-09-30 23:29:35.693830] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.058 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.058 "name": "raid_bdev1", 00:11:56.058 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:56.058 "strip_size_kb": 0, 00:11:56.058 "state": "online", 00:11:56.058 "raid_level": "raid1", 
00:11:56.058 "superblock": true, 00:11:56.058 "num_base_bdevs": 2, 00:11:56.058 "num_base_bdevs_discovered": 2, 00:11:56.058 "num_base_bdevs_operational": 2, 00:11:56.058 "process": { 00:11:56.058 "type": "rebuild", 00:11:56.058 "target": "spare", 00:11:56.059 "progress": { 00:11:56.059 "blocks": 26624, 00:11:56.059 "percent": 41 00:11:56.059 } 00:11:56.059 }, 00:11:56.059 "base_bdevs_list": [ 00:11:56.059 { 00:11:56.059 "name": "spare", 00:11:56.059 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:11:56.059 "is_configured": true, 00:11:56.059 "data_offset": 2048, 00:11:56.059 "data_size": 63488 00:11:56.059 }, 00:11:56.059 { 00:11:56.059 "name": "BaseBdev2", 00:11:56.059 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:56.059 "is_configured": true, 00:11:56.059 "data_offset": 2048, 00:11:56.059 "data_size": 63488 00:11:56.059 } 00:11:56.059 ] 00:11:56.059 }' 00:11:56.059 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.059 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.059 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.059 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.059 23:29:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.887 109.20 IOPS, 327.60 MiB/s [2024-09-30 23:29:36.567649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:57.146 [2024-09-30 23:29:36.782715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.146 "name": "raid_bdev1", 00:11:57.146 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:57.146 "strip_size_kb": 0, 00:11:57.146 "state": "online", 00:11:57.146 "raid_level": "raid1", 00:11:57.146 "superblock": true, 00:11:57.146 "num_base_bdevs": 2, 00:11:57.146 "num_base_bdevs_discovered": 2, 00:11:57.146 "num_base_bdevs_operational": 2, 00:11:57.146 "process": { 00:11:57.146 "type": "rebuild", 00:11:57.146 "target": "spare", 00:11:57.146 "progress": { 00:11:57.146 "blocks": 47104, 00:11:57.146 "percent": 74 00:11:57.146 } 00:11:57.146 }, 00:11:57.146 "base_bdevs_list": [ 00:11:57.146 { 00:11:57.146 "name": "spare", 00:11:57.146 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:11:57.146 "is_configured": true, 00:11:57.146 "data_offset": 2048, 00:11:57.146 "data_size": 63488 00:11:57.146 }, 00:11:57.146 { 
00:11:57.146 "name": "BaseBdev2", 00:11:57.146 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:57.146 "is_configured": true, 00:11:57.146 "data_offset": 2048, 00:11:57.146 "data_size": 63488 00:11:57.146 } 00:11:57.146 ] 00:11:57.146 }' 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.146 23:29:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:57.664 97.50 IOPS, 292.50 MiB/s [2024-09-30 23:29:37.454977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:57.923 [2024-09-30 23:29:37.676618] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:58.182 [2024-09-30 23:29:37.781711] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:58.182 [2024-09-30 23:29:37.784298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.182 23:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:58.182 23:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.182 23:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.182 23:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.182 23:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.182 23:29:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.182 23:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.182 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.182 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.182 23:29:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.182 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.442 "name": "raid_bdev1", 00:11:58.442 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:58.442 "strip_size_kb": 0, 00:11:58.442 "state": "online", 00:11:58.442 "raid_level": "raid1", 00:11:58.442 "superblock": true, 00:11:58.442 "num_base_bdevs": 2, 00:11:58.442 "num_base_bdevs_discovered": 2, 00:11:58.442 "num_base_bdevs_operational": 2, 00:11:58.442 "base_bdevs_list": [ 00:11:58.442 { 00:11:58.442 "name": "spare", 00:11:58.442 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:11:58.442 "is_configured": true, 00:11:58.442 "data_offset": 2048, 00:11:58.442 "data_size": 63488 00:11:58.442 }, 00:11:58.442 { 00:11:58.442 "name": "BaseBdev2", 00:11:58.442 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:58.442 "is_configured": true, 00:11:58.442 "data_offset": 2048, 00:11:58.442 "data_size": 63488 00:11:58.442 } 00:11:58.442 ] 00:11:58.442 }' 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.442 88.00 
IOPS, 264.00 MiB/s 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.442 "name": "raid_bdev1", 00:11:58.442 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:58.442 "strip_size_kb": 0, 00:11:58.442 "state": "online", 00:11:58.442 "raid_level": "raid1", 00:11:58.442 "superblock": true, 00:11:58.442 "num_base_bdevs": 2, 00:11:58.442 "num_base_bdevs_discovered": 2, 00:11:58.442 "num_base_bdevs_operational": 2, 00:11:58.442 "base_bdevs_list": [ 00:11:58.442 { 00:11:58.442 "name": "spare", 00:11:58.442 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:11:58.442 "is_configured": true, 00:11:58.442 
"data_offset": 2048, 00:11:58.442 "data_size": 63488 00:11:58.442 }, 00:11:58.442 { 00:11:58.442 "name": "BaseBdev2", 00:11:58.442 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:58.442 "is_configured": true, 00:11:58.442 "data_offset": 2048, 00:11:58.442 "data_size": 63488 00:11:58.442 } 00:11:58.442 ] 00:11:58.442 }' 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.442 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.702 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.702 "name": "raid_bdev1", 00:11:58.702 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:11:58.702 "strip_size_kb": 0, 00:11:58.702 "state": "online", 00:11:58.702 "raid_level": "raid1", 00:11:58.702 "superblock": true, 00:11:58.702 "num_base_bdevs": 2, 00:11:58.702 "num_base_bdevs_discovered": 2, 00:11:58.702 "num_base_bdevs_operational": 2, 00:11:58.702 "base_bdevs_list": [ 00:11:58.702 { 00:11:58.702 "name": "spare", 00:11:58.702 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:11:58.702 "is_configured": true, 00:11:58.702 "data_offset": 2048, 00:11:58.702 "data_size": 63488 00:11:58.702 }, 00:11:58.702 { 00:11:58.702 "name": "BaseBdev2", 00:11:58.702 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:11:58.702 "is_configured": true, 00:11:58.702 "data_offset": 2048, 00:11:58.702 "data_size": 63488 00:11:58.702 } 00:11:58.702 ] 00:11:58.702 }' 00:11:58.702 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.702 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.961 
[2024-09-30 23:29:38.630022] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.961 [2024-09-30 23:29:38.630060] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.961 00:11:58.961 Latency(us) 00:11:58.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.961 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:58.961 raid_bdev1 : 7.56 85.29 255.86 0.00 0.00 15102.91 277.24 113099.68 00:11:58.961 =================================================================================================================== 00:11:58.961 Total : 85.29 255.86 0.00 0.00 15102.91 277.24 113099.68 00:11:58.961 [2024-09-30 23:29:38.653355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.961 [2024-09-30 23:29:38.653436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.961 [2024-09-30 23:29:38.653537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.961 [2024-09-30 23:29:38.653592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:58.961 { 00:11:58.961 "results": [ 00:11:58.961 { 00:11:58.961 "job": "raid_bdev1", 00:11:58.961 "core_mask": "0x1", 00:11:58.961 "workload": "randrw", 00:11:58.961 "percentage": 50, 00:11:58.961 "status": "finished", 00:11:58.961 "queue_depth": 2, 00:11:58.961 "io_size": 3145728, 00:11:58.961 "runtime": 7.562716, 00:11:58.961 "iops": 85.28682023759718, 00:11:58.961 "mibps": 255.86046071279156, 00:11:58.961 "io_failed": 0, 00:11:58.961 "io_timeout": 0, 00:11:58.961 "avg_latency_us": 15102.906360651297, 00:11:58.961 "min_latency_us": 277.2401746724891, 00:11:58.961 "max_latency_us": 113099.68209606987 00:11:58.961 } 00:11:58.961 ], 00:11:58.961 "core_count": 1 00:11:58.961 } 00:11:58.961 23:29:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:58.961 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:59.220 /dev/nbd0 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.220 1+0 records in 00:11:59.220 1+0 records out 00:11:59.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452712 s, 9.0 MB/s 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:59.220 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:59.221 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:59.221 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:59.221 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:59.221 23:29:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:59.480 /dev/nbd1 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:59.480 23:29:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.480 1+0 records in 00:11:59.480 1+0 records out 00:11:59.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425906 s, 9.6 MB/s 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.480 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:59.480 
23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.739 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.998 
23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.998 [2024-09-30 23:29:39.824273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:59.998 [2024-09-30 23:29:39.824390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.998 [2024-09-30 23:29:39.824438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:59.998 [2024-09-30 23:29:39.824483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.998 [2024-09-30 23:29:39.826979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.998 [2024-09-30 23:29:39.827052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:59.998 [2024-09-30 23:29:39.827169] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:59.998 [2024-09-30 23:29:39.827283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.998 [2024-09-30 23:29:39.827450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.998 spare 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.998 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.257 [2024-09-30 23:29:39.927399] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006600 00:12:00.257 [2024-09-30 23:29:39.927466] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.257 [2024-09-30 23:29:39.927767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:12:00.257 [2024-09-30 23:29:39.927965] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:00.257 [2024-09-30 23:29:39.928014] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:00.258 [2024-09-30 23:29:39.928202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.258 "name": "raid_bdev1", 00:12:00.258 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:00.258 "strip_size_kb": 0, 00:12:00.258 "state": "online", 00:12:00.258 "raid_level": "raid1", 00:12:00.258 "superblock": true, 00:12:00.258 "num_base_bdevs": 2, 00:12:00.258 "num_base_bdevs_discovered": 2, 00:12:00.258 "num_base_bdevs_operational": 2, 00:12:00.258 "base_bdevs_list": [ 00:12:00.258 { 00:12:00.258 "name": "spare", 00:12:00.258 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:12:00.258 "is_configured": true, 00:12:00.258 "data_offset": 2048, 00:12:00.258 "data_size": 63488 00:12:00.258 }, 00:12:00.258 { 00:12:00.258 "name": "BaseBdev2", 00:12:00.258 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:00.258 "is_configured": true, 00:12:00.258 "data_offset": 2048, 00:12:00.258 "data_size": 63488 00:12:00.258 } 00:12:00.258 ] 00:12:00.258 }' 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.258 23:29:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.826 "name": "raid_bdev1", 00:12:00.826 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:00.826 "strip_size_kb": 0, 00:12:00.826 "state": "online", 00:12:00.826 "raid_level": "raid1", 00:12:00.826 "superblock": true, 00:12:00.826 "num_base_bdevs": 2, 00:12:00.826 "num_base_bdevs_discovered": 2, 00:12:00.826 "num_base_bdevs_operational": 2, 00:12:00.826 "base_bdevs_list": [ 00:12:00.826 { 00:12:00.826 "name": "spare", 00:12:00.826 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:12:00.826 "is_configured": true, 00:12:00.826 "data_offset": 2048, 00:12:00.826 "data_size": 63488 00:12:00.826 }, 00:12:00.826 { 00:12:00.826 "name": "BaseBdev2", 00:12:00.826 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:00.826 "is_configured": true, 00:12:00.826 "data_offset": 2048, 00:12:00.826 "data_size": 63488 00:12:00.826 } 00:12:00.826 ] 00:12:00.826 }' 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.826 [2024-09-30 23:29:40.587287] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:00.826 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.827 "name": "raid_bdev1", 00:12:00.827 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:00.827 "strip_size_kb": 0, 00:12:00.827 "state": "online", 00:12:00.827 "raid_level": "raid1", 00:12:00.827 "superblock": true, 00:12:00.827 "num_base_bdevs": 2, 00:12:00.827 "num_base_bdevs_discovered": 1, 00:12:00.827 "num_base_bdevs_operational": 1, 00:12:00.827 "base_bdevs_list": [ 00:12:00.827 { 00:12:00.827 "name": null, 00:12:00.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.827 "is_configured": false, 00:12:00.827 "data_offset": 0, 00:12:00.827 "data_size": 63488 00:12:00.827 }, 00:12:00.827 { 00:12:00.827 "name": "BaseBdev2", 00:12:00.827 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:00.827 
"is_configured": true, 00:12:00.827 "data_offset": 2048, 00:12:00.827 "data_size": 63488 00:12:00.827 } 00:12:00.827 ] 00:12:00.827 }' 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.827 23:29:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.395 23:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:01.395 23:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.395 23:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:01.395 [2024-09-30 23:29:41.090978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.395 [2024-09-30 23:29:41.091220] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:01.395 [2024-09-30 23:29:41.091296] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:01.395 [2024-09-30 23:29:41.091369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.395 [2024-09-30 23:29:41.099162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:12:01.395 23:29:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.395 23:29:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:01.395 [2024-09-30 23:29:41.101308] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.334 "name": "raid_bdev1", 00:12:02.334 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:02.334 "strip_size_kb": 0, 00:12:02.334 "state": "online", 
00:12:02.334 "raid_level": "raid1", 00:12:02.334 "superblock": true, 00:12:02.334 "num_base_bdevs": 2, 00:12:02.334 "num_base_bdevs_discovered": 2, 00:12:02.334 "num_base_bdevs_operational": 2, 00:12:02.334 "process": { 00:12:02.334 "type": "rebuild", 00:12:02.334 "target": "spare", 00:12:02.334 "progress": { 00:12:02.334 "blocks": 20480, 00:12:02.334 "percent": 32 00:12:02.334 } 00:12:02.334 }, 00:12:02.334 "base_bdevs_list": [ 00:12:02.334 { 00:12:02.334 "name": "spare", 00:12:02.334 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:12:02.334 "is_configured": true, 00:12:02.334 "data_offset": 2048, 00:12:02.334 "data_size": 63488 00:12:02.334 }, 00:12:02.334 { 00:12:02.334 "name": "BaseBdev2", 00:12:02.334 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:02.334 "is_configured": true, 00:12:02.334 "data_offset": 2048, 00:12:02.334 "data_size": 63488 00:12:02.334 } 00:12:02.334 ] 00:12:02.334 }' 00:12:02.334 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.594 [2024-09-30 23:29:42.261263] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:02.594 [2024-09-30 23:29:42.308827] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:02.594 [2024-09-30 
23:29:42.308945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.594 [2024-09-30 23:29:42.308966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:02.594 [2024-09-30 23:29:42.308975] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.594 "name": "raid_bdev1", 00:12:02.594 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:02.594 "strip_size_kb": 0, 00:12:02.594 "state": "online", 00:12:02.594 "raid_level": "raid1", 00:12:02.594 "superblock": true, 00:12:02.594 "num_base_bdevs": 2, 00:12:02.594 "num_base_bdevs_discovered": 1, 00:12:02.594 "num_base_bdevs_operational": 1, 00:12:02.594 "base_bdevs_list": [ 00:12:02.594 { 00:12:02.594 "name": null, 00:12:02.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.594 "is_configured": false, 00:12:02.594 "data_offset": 0, 00:12:02.594 "data_size": 63488 00:12:02.594 }, 00:12:02.594 { 00:12:02.594 "name": "BaseBdev2", 00:12:02.594 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:02.594 "is_configured": true, 00:12:02.594 "data_offset": 2048, 00:12:02.594 "data_size": 63488 00:12:02.594 } 00:12:02.594 ] 00:12:02.594 }' 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.594 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.163 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:03.163 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.163 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.163 [2024-09-30 23:29:42.791793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:03.163 [2024-09-30 23:29:42.791922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.163 [2024-09-30 23:29:42.791973] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:12:03.163 [2024-09-30 23:29:42.792003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.163 [2024-09-30 23:29:42.792527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.163 [2024-09-30 23:29:42.792586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:03.163 [2024-09-30 23:29:42.792736] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:03.163 [2024-09-30 23:29:42.792774] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:03.163 [2024-09-30 23:29:42.792822] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:03.163 [2024-09-30 23:29:42.792878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:03.163 [2024-09-30 23:29:42.800826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:03.163 spare 00:12:03.163 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.163 23:29:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:03.163 [2024-09-30 23:29:42.803048] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:04.179 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.179 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.179 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.180 "name": "raid_bdev1", 00:12:04.180 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:04.180 "strip_size_kb": 0, 00:12:04.180 "state": "online", 00:12:04.180 "raid_level": "raid1", 00:12:04.180 "superblock": true, 00:12:04.180 "num_base_bdevs": 2, 00:12:04.180 "num_base_bdevs_discovered": 2, 00:12:04.180 "num_base_bdevs_operational": 2, 00:12:04.180 "process": { 00:12:04.180 "type": "rebuild", 00:12:04.180 "target": "spare", 00:12:04.180 "progress": { 00:12:04.180 "blocks": 20480, 00:12:04.180 "percent": 32 00:12:04.180 } 00:12:04.180 }, 00:12:04.180 "base_bdevs_list": [ 00:12:04.180 { 00:12:04.180 "name": "spare", 00:12:04.180 "uuid": "b645bddf-84a0-5aab-8227-e304bd9ccc75", 00:12:04.180 "is_configured": true, 00:12:04.180 "data_offset": 2048, 00:12:04.180 "data_size": 63488 00:12:04.180 }, 00:12:04.180 { 00:12:04.180 "name": "BaseBdev2", 00:12:04.180 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:04.180 "is_configured": true, 00:12:04.180 "data_offset": 2048, 00:12:04.180 "data_size": 63488 00:12:04.180 } 00:12:04.180 ] 00:12:04.180 }' 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.180 23:29:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.180 [2024-09-30 23:29:43.947425] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.180 [2024-09-30 23:29:44.010970] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:04.180 [2024-09-30 23:29:44.011035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.180 [2024-09-30 23:29:44.011051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.180 [2024-09-30 23:29:44.011061] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.180 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.439 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.439 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.439 "name": "raid_bdev1", 00:12:04.439 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:04.439 "strip_size_kb": 0, 00:12:04.439 "state": "online", 00:12:04.439 "raid_level": "raid1", 00:12:04.439 "superblock": true, 00:12:04.439 "num_base_bdevs": 2, 00:12:04.439 "num_base_bdevs_discovered": 1, 00:12:04.439 "num_base_bdevs_operational": 1, 00:12:04.439 "base_bdevs_list": [ 00:12:04.439 { 00:12:04.439 "name": null, 00:12:04.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.439 "is_configured": false, 00:12:04.439 "data_offset": 0, 00:12:04.439 "data_size": 63488 00:12:04.439 }, 00:12:04.439 { 00:12:04.439 "name": "BaseBdev2", 00:12:04.439 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:04.439 "is_configured": true, 00:12:04.439 "data_offset": 2048, 00:12:04.439 "data_size": 63488 00:12:04.439 } 00:12:04.439 ] 00:12:04.439 }' 
00:12:04.439 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.439 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.698 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.699 "name": "raid_bdev1", 00:12:04.699 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:04.699 "strip_size_kb": 0, 00:12:04.699 "state": "online", 00:12:04.699 "raid_level": "raid1", 00:12:04.699 "superblock": true, 00:12:04.699 "num_base_bdevs": 2, 00:12:04.699 "num_base_bdevs_discovered": 1, 00:12:04.699 "num_base_bdevs_operational": 1, 00:12:04.699 "base_bdevs_list": [ 00:12:04.699 { 00:12:04.699 "name": null, 00:12:04.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.699 "is_configured": false, 00:12:04.699 "data_offset": 0, 
00:12:04.699 "data_size": 63488 00:12:04.699 }, 00:12:04.699 { 00:12:04.699 "name": "BaseBdev2", 00:12:04.699 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:04.699 "is_configured": true, 00:12:04.699 "data_offset": 2048, 00:12:04.699 "data_size": 63488 00:12:04.699 } 00:12:04.699 ] 00:12:04.699 }' 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:04.699 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.958 [2024-09-30 23:29:44.565783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:04.958 [2024-09-30 23:29:44.565838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.958 [2024-09-30 23:29:44.565868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:04.958 [2024-09-30 23:29:44.565881] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.958 [2024-09-30 23:29:44.566332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.958 [2024-09-30 23:29:44.566352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:04.958 [2024-09-30 23:29:44.566428] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:04.958 [2024-09-30 23:29:44.566447] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:04.958 [2024-09-30 23:29:44.566455] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:04.958 [2024-09-30 23:29:44.566467] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:04.958 BaseBdev1 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.958 23:29:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.897 "name": "raid_bdev1", 00:12:05.897 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:05.897 "strip_size_kb": 0, 00:12:05.897 "state": "online", 00:12:05.897 "raid_level": "raid1", 00:12:05.897 "superblock": true, 00:12:05.897 "num_base_bdevs": 2, 00:12:05.897 "num_base_bdevs_discovered": 1, 00:12:05.897 "num_base_bdevs_operational": 1, 00:12:05.897 "base_bdevs_list": [ 00:12:05.897 { 00:12:05.897 "name": null, 00:12:05.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.897 "is_configured": false, 00:12:05.897 "data_offset": 0, 00:12:05.897 "data_size": 63488 00:12:05.897 }, 00:12:05.897 { 00:12:05.897 "name": "BaseBdev2", 00:12:05.897 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:05.897 "is_configured": true, 00:12:05.897 "data_offset": 2048, 00:12:05.897 "data_size": 63488 00:12:05.897 } 00:12:05.897 ] 00:12:05.897 }' 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.897 23:29:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.465 "name": "raid_bdev1", 00:12:06.465 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:06.465 "strip_size_kb": 0, 00:12:06.465 "state": "online", 00:12:06.465 "raid_level": "raid1", 00:12:06.465 "superblock": true, 00:12:06.465 "num_base_bdevs": 2, 00:12:06.465 "num_base_bdevs_discovered": 1, 00:12:06.465 "num_base_bdevs_operational": 1, 00:12:06.465 "base_bdevs_list": [ 00:12:06.465 { 00:12:06.465 "name": null, 00:12:06.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.465 "is_configured": false, 00:12:06.465 "data_offset": 0, 00:12:06.465 "data_size": 63488 00:12:06.465 }, 00:12:06.465 { 00:12:06.465 "name": "BaseBdev2", 00:12:06.465 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:06.465 "is_configured": true, 
00:12:06.465 "data_offset": 2048, 00:12:06.465 "data_size": 63488 00:12:06.465 } 00:12:06.465 ] 00:12:06.465 }' 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.465 [2024-09-30 23:29:46.175392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.465 [2024-09-30 23:29:46.175576] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:12:06.465 [2024-09-30 23:29:46.175594] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:12:06.465 request:
00:12:06.465 {
00:12:06.465 "base_bdev": "BaseBdev1",
00:12:06.465 "raid_bdev": "raid_bdev1",
00:12:06.465 "method": "bdev_raid_add_base_bdev",
00:12:06.465 "req_id": 1
00:12:06.465 }
00:12:06.465 Got JSON-RPC error response
00:12:06.465 response:
00:12:06.465 {
00:12:06.465 "code": -22,
00:12:06.465 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:12:06.465 }
00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1
00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:06.465 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:06.466 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:06.466 23:29:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1
00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local
num_base_bdevs_operational=1 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.404 "name": "raid_bdev1", 00:12:07.404 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:07.404 "strip_size_kb": 0, 00:12:07.404 "state": "online", 00:12:07.404 "raid_level": "raid1", 00:12:07.404 "superblock": true, 00:12:07.404 "num_base_bdevs": 2, 00:12:07.404 "num_base_bdevs_discovered": 1, 00:12:07.404 "num_base_bdevs_operational": 1, 00:12:07.404 "base_bdevs_list": [ 00:12:07.404 { 00:12:07.404 "name": null, 00:12:07.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.404 "is_configured": false, 00:12:07.404 "data_offset": 0, 00:12:07.404 "data_size": 63488 00:12:07.404 }, 00:12:07.404 { 00:12:07.404 "name": "BaseBdev2", 00:12:07.404 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98", 00:12:07.404 "is_configured": true, 00:12:07.404 "data_offset": 2048, 00:12:07.404 "data_size": 63488 00:12:07.404 } 00:12:07.404 ] 00:12:07.404 }' 
00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.404 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.973 "name": "raid_bdev1", 00:12:07.973 "uuid": "71cbe803-7349-45a6-b388-9393f1660ff8", 00:12:07.973 "strip_size_kb": 0, 00:12:07.973 "state": "online", 00:12:07.973 "raid_level": "raid1", 00:12:07.973 "superblock": true, 00:12:07.973 "num_base_bdevs": 2, 00:12:07.973 "num_base_bdevs_discovered": 1, 00:12:07.973 "num_base_bdevs_operational": 1, 00:12:07.973 "base_bdevs_list": [ 00:12:07.973 { 00:12:07.973 "name": null, 00:12:07.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.973 "is_configured": false, 00:12:07.973 "data_offset": 0, 
00:12:07.973 "data_size": 63488
00:12:07.973 },
00:12:07.973 {
00:12:07.973 "name": "BaseBdev2",
00:12:07.973 "uuid": "d67ad16a-3e90-5dfc-ba7e-6effb9861e98",
00:12:07.973 "is_configured": true,
00:12:07.973 "data_offset": 2048,
00:12:07.973 "data_size": 63488
00:12:07.973 }
00:12:07.973 ]
00:12:07.973 }'
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87548
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87548 ']'
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87548
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:07.973 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87548
00:12:08.233 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:08.233 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:08.233 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87548'
00:12:08.233 killing process with pid 87548
00:12:08.233 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87548
00:12:08.233 Received shutdown signal, test time was about 16.786862 seconds
00:12:08.233
00:12:08.233 Latency(us)
00:12:08.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:08.233 ===================================================================================================================
00:12:08.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:08.233 [2024-09-30 23:29:47.856299] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:08.233 [2024-09-30 23:29:47.856441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:08.233 23:29:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87548
00:12:08.233 [2024-09-30 23:29:47.856509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:08.233 [2024-09-30 23:29:47.856518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline
00:12:08.233 [2024-09-30 23:29:47.902352] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:08.493 23:29:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0
00:12:08.493
00:12:08.493 real 0m18.840s
00:12:08.493 user 0m24.885s
00:12:08.493 sys 0m2.325s
00:12:08.493 23:29:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:08.493 23:29:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:12:08.493 ************************************
00:12:08.493 END TEST raid_rebuild_test_sb_io
00:12:08.493 ************************************
00:12:08.493 23:29:48 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4
00:12:08.493 23:29:48 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true
00:12:08.493 23:29:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:12:08.493 23:29:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:08.493 23:29:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:08.753 ************************************ 00:12:08.753 START TEST raid_rebuild_test 00:12:08.753 ************************************ 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88229 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88229 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88229 ']' 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:12:08.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.753 23:29:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.753 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:08.753 Zero copy mechanism will not be used. 00:12:08.753 [2024-09-30 23:29:48.452008] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:12:08.753 [2024-09-30 23:29:48.452151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88229 ] 00:12:09.013 [2024-09-30 23:29:48.616043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.013 [2024-09-30 23:29:48.683426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.013 [2024-09-30 23:29:48.759057] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.013 [2024-09-30 23:29:48.759096] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.582 BaseBdev1_malloc 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.582 [2024-09-30 23:29:49.297556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:09.582 [2024-09-30 23:29:49.297628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.582 [2024-09-30 23:29:49.297664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:09.582 [2024-09-30 23:29:49.297680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.582 [2024-09-30 23:29:49.300175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.582 [2024-09-30 23:29:49.300207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:09.582 BaseBdev1 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.582 BaseBdev2_malloc 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.582 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.582 [2024-09-30 23:29:49.348131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:09.582 [2024-09-30 23:29:49.348185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.582 [2024-09-30 23:29:49.348210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:09.582 [2024-09-30 23:29:49.348219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.582 [2024-09-30 23:29:49.350696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.582 [2024-09-30 23:29:49.350728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:09.582 BaseBdev2 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.583 BaseBdev3_malloc 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:09.583 23:29:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.583 [2024-09-30 23:29:49.382559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:09.583 [2024-09-30 23:29:49.382605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.583 [2024-09-30 23:29:49.382632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:09.583 [2024-09-30 23:29:49.382641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.583 [2024-09-30 23:29:49.384985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.583 [2024-09-30 23:29:49.385015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:09.583 BaseBdev3 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.583 BaseBdev4_malloc 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.583 [2024-09-30 23:29:49.417204] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:09.583 [2024-09-30 23:29:49.417256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.583 [2024-09-30 23:29:49.417283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:09.583 [2024-09-30 23:29:49.417292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.583 [2024-09-30 23:29:49.419647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.583 [2024-09-30 23:29:49.419677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:09.583 BaseBdev4 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.583 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.843 spare_malloc 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.843 spare_delay 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.843 [2024-09-30 23:29:49.463763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:09.843 [2024-09-30 23:29:49.463813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.843 [2024-09-30 23:29:49.463835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:09.843 [2024-09-30 23:29:49.463845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.843 [2024-09-30 23:29:49.466229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.843 [2024-09-30 23:29:49.466261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:09.843 spare 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.843 [2024-09-30 23:29:49.475809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.843 [2024-09-30 23:29:49.477858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.843 [2024-09-30 23:29:49.477939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.843 [2024-09-30 23:29:49.477982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:09.843 [2024-09-30 23:29:49.478057] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:09.843 [2024-09-30 
23:29:49.478076] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:09.843 [2024-09-30 23:29:49.478352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:09.843 [2024-09-30 23:29:49.478492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:09.843 [2024-09-30 23:29:49.478515] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:09.843 [2024-09-30 23:29:49.478651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.843 "name": "raid_bdev1", 00:12:09.843 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:09.843 "strip_size_kb": 0, 00:12:09.843 "state": "online", 00:12:09.843 "raid_level": "raid1", 00:12:09.843 "superblock": false, 00:12:09.843 "num_base_bdevs": 4, 00:12:09.843 "num_base_bdevs_discovered": 4, 00:12:09.843 "num_base_bdevs_operational": 4, 00:12:09.843 "base_bdevs_list": [ 00:12:09.843 { 00:12:09.843 "name": "BaseBdev1", 00:12:09.843 "uuid": "bd2bf1d1-1334-5993-ab13-98028060f4ba", 00:12:09.843 "is_configured": true, 00:12:09.843 "data_offset": 0, 00:12:09.843 "data_size": 65536 00:12:09.843 }, 00:12:09.843 { 00:12:09.843 "name": "BaseBdev2", 00:12:09.843 "uuid": "be1aecfa-bf3b-5bb2-8fd4-3b73da0613f6", 00:12:09.843 "is_configured": true, 00:12:09.843 "data_offset": 0, 00:12:09.843 "data_size": 65536 00:12:09.843 }, 00:12:09.843 { 00:12:09.843 "name": "BaseBdev3", 00:12:09.843 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:09.843 "is_configured": true, 00:12:09.843 "data_offset": 0, 00:12:09.843 "data_size": 65536 00:12:09.843 }, 00:12:09.843 { 00:12:09.843 "name": "BaseBdev4", 00:12:09.843 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:09.843 "is_configured": true, 00:12:09.843 "data_offset": 0, 00:12:09.843 "data_size": 65536 00:12:09.843 } 00:12:09.843 ] 00:12:09.843 }' 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.843 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.102 23:29:49 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:10.102 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:10.102 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.102 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.102 [2024-09-30 23:29:49.899398] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.102 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.102 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:10.102 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:10.102 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.103 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.103 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:10.362 23:29:49 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:10.362 23:29:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:10.362 [2024-09-30 23:29:50.198588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:10.621 /dev/nbd0 00:12:10.621 23:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:10.621 23:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:10.621 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:10.621 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:10.621 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:10.621 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.622 1+0 records in 00:12:10.622 1+0 records out 00:12:10.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384354 s, 10.7 MB/s 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:10.622 23:29:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:15.899 65536+0 records in 00:12:15.899 65536+0 records out 00:12:15.899 33554432 bytes (34 MB, 32 MiB) copied, 5.06335 s, 6.6 MB/s 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:15.899 [2024-09-30 23:29:55.530136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.899 [2024-09-30 23:29:55.582010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.899 "name": "raid_bdev1", 00:12:15.899 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:15.899 "strip_size_kb": 0, 00:12:15.899 "state": "online", 00:12:15.899 "raid_level": "raid1", 00:12:15.899 "superblock": false, 00:12:15.899 "num_base_bdevs": 4, 00:12:15.899 "num_base_bdevs_discovered": 3, 00:12:15.899 "num_base_bdevs_operational": 3, 00:12:15.899 "base_bdevs_list": [ 00:12:15.899 { 00:12:15.899 "name": null, 00:12:15.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.899 "is_configured": false, 00:12:15.899 "data_offset": 0, 00:12:15.899 "data_size": 
65536 00:12:15.899 }, 00:12:15.899 { 00:12:15.899 "name": "BaseBdev2", 00:12:15.899 "uuid": "be1aecfa-bf3b-5bb2-8fd4-3b73da0613f6", 00:12:15.899 "is_configured": true, 00:12:15.899 "data_offset": 0, 00:12:15.899 "data_size": 65536 00:12:15.899 }, 00:12:15.899 { 00:12:15.899 "name": "BaseBdev3", 00:12:15.899 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:15.899 "is_configured": true, 00:12:15.899 "data_offset": 0, 00:12:15.899 "data_size": 65536 00:12:15.899 }, 00:12:15.899 { 00:12:15.899 "name": "BaseBdev4", 00:12:15.899 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:15.899 "is_configured": true, 00:12:15.899 "data_offset": 0, 00:12:15.899 "data_size": 65536 00:12:15.899 } 00:12:15.899 ] 00:12:15.899 }' 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.899 23:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.466 23:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:16.466 23:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.466 23:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.466 [2024-09-30 23:29:56.049286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:16.466 [2024-09-30 23:29:56.055206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:16.466 23:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.466 23:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:16.466 [2024-09-30 23:29:56.057447] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.424 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.424 "name": "raid_bdev1", 00:12:17.424 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:17.424 "strip_size_kb": 0, 00:12:17.424 "state": "online", 00:12:17.424 "raid_level": "raid1", 00:12:17.424 "superblock": false, 00:12:17.424 "num_base_bdevs": 4, 00:12:17.424 "num_base_bdevs_discovered": 4, 00:12:17.424 "num_base_bdevs_operational": 4, 00:12:17.424 "process": { 00:12:17.424 "type": "rebuild", 00:12:17.424 "target": "spare", 00:12:17.424 "progress": { 00:12:17.424 "blocks": 20480, 00:12:17.424 "percent": 31 00:12:17.424 } 00:12:17.424 }, 00:12:17.424 "base_bdevs_list": [ 00:12:17.424 { 00:12:17.424 "name": "spare", 00:12:17.424 "uuid": "61d3fa62-b2df-57b0-a8e3-2d64025cfe58", 00:12:17.424 "is_configured": true, 00:12:17.424 "data_offset": 0, 00:12:17.424 "data_size": 65536 00:12:17.424 }, 00:12:17.424 { 00:12:17.424 "name": "BaseBdev2", 00:12:17.424 "uuid": "be1aecfa-bf3b-5bb2-8fd4-3b73da0613f6", 00:12:17.424 "is_configured": true, 00:12:17.424 "data_offset": 0, 
00:12:17.424 "data_size": 65536 00:12:17.424 }, 00:12:17.424 { 00:12:17.424 "name": "BaseBdev3", 00:12:17.424 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:17.424 "is_configured": true, 00:12:17.424 "data_offset": 0, 00:12:17.424 "data_size": 65536 00:12:17.425 }, 00:12:17.425 { 00:12:17.425 "name": "BaseBdev4", 00:12:17.425 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:17.425 "is_configured": true, 00:12:17.425 "data_offset": 0, 00:12:17.425 "data_size": 65536 00:12:17.425 } 00:12:17.425 ] 00:12:17.425 }' 00:12:17.425 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.425 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:17.425 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.425 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:17.425 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:17.425 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.425 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.425 [2024-09-30 23:29:57.197396] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.425 [2024-09-30 23:29:57.266106] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:17.425 [2024-09-30 23:29:57.266163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.425 [2024-09-30 23:29:57.266183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:17.425 [2024-09-30 23:29:57.266191] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:17.683 23:29:57 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.683 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:17.683 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.684 "name": "raid_bdev1", 00:12:17.684 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:17.684 "strip_size_kb": 0, 00:12:17.684 "state": "online", 00:12:17.684 "raid_level": "raid1", 00:12:17.684 "superblock": false, 00:12:17.684 
"num_base_bdevs": 4, 00:12:17.684 "num_base_bdevs_discovered": 3, 00:12:17.684 "num_base_bdevs_operational": 3, 00:12:17.684 "base_bdevs_list": [ 00:12:17.684 { 00:12:17.684 "name": null, 00:12:17.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.684 "is_configured": false, 00:12:17.684 "data_offset": 0, 00:12:17.684 "data_size": 65536 00:12:17.684 }, 00:12:17.684 { 00:12:17.684 "name": "BaseBdev2", 00:12:17.684 "uuid": "be1aecfa-bf3b-5bb2-8fd4-3b73da0613f6", 00:12:17.684 "is_configured": true, 00:12:17.684 "data_offset": 0, 00:12:17.684 "data_size": 65536 00:12:17.684 }, 00:12:17.684 { 00:12:17.684 "name": "BaseBdev3", 00:12:17.684 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:17.684 "is_configured": true, 00:12:17.684 "data_offset": 0, 00:12:17.684 "data_size": 65536 00:12:17.684 }, 00:12:17.684 { 00:12:17.684 "name": "BaseBdev4", 00:12:17.684 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:17.684 "is_configured": true, 00:12:17.684 "data_offset": 0, 00:12:17.684 "data_size": 65536 00:12:17.684 } 00:12:17.684 ] 00:12:17.684 }' 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.684 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.941 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.941 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.941 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.941 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.941 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.942 "name": "raid_bdev1", 00:12:17.942 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:17.942 "strip_size_kb": 0, 00:12:17.942 "state": "online", 00:12:17.942 "raid_level": "raid1", 00:12:17.942 "superblock": false, 00:12:17.942 "num_base_bdevs": 4, 00:12:17.942 "num_base_bdevs_discovered": 3, 00:12:17.942 "num_base_bdevs_operational": 3, 00:12:17.942 "base_bdevs_list": [ 00:12:17.942 { 00:12:17.942 "name": null, 00:12:17.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.942 "is_configured": false, 00:12:17.942 "data_offset": 0, 00:12:17.942 "data_size": 65536 00:12:17.942 }, 00:12:17.942 { 00:12:17.942 "name": "BaseBdev2", 00:12:17.942 "uuid": "be1aecfa-bf3b-5bb2-8fd4-3b73da0613f6", 00:12:17.942 "is_configured": true, 00:12:17.942 "data_offset": 0, 00:12:17.942 "data_size": 65536 00:12:17.942 }, 00:12:17.942 { 00:12:17.942 "name": "BaseBdev3", 00:12:17.942 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:17.942 "is_configured": true, 00:12:17.942 "data_offset": 0, 00:12:17.942 "data_size": 65536 00:12:17.942 }, 00:12:17.942 { 00:12:17.942 "name": "BaseBdev4", 00:12:17.942 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:17.942 "is_configured": true, 00:12:17.942 "data_offset": 0, 00:12:17.942 "data_size": 65536 00:12:17.942 } 00:12:17.942 ] 00:12:17.942 }' 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.942 23:29:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.942 [2024-09-30 23:29:57.780462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.942 [2024-09-30 23:29:57.785791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.942 23:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:17.942 [2024-09-30 23:29:57.787941] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.315 
23:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.315 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.315 "name": "raid_bdev1", 00:12:19.315 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:19.315 "strip_size_kb": 0, 00:12:19.315 "state": "online", 00:12:19.315 "raid_level": "raid1", 00:12:19.315 "superblock": false, 00:12:19.315 "num_base_bdevs": 4, 00:12:19.315 "num_base_bdevs_discovered": 4, 00:12:19.315 "num_base_bdevs_operational": 4, 00:12:19.315 "process": { 00:12:19.315 "type": "rebuild", 00:12:19.315 "target": "spare", 00:12:19.315 "progress": { 00:12:19.315 "blocks": 20480, 00:12:19.315 "percent": 31 00:12:19.315 } 00:12:19.315 }, 00:12:19.315 "base_bdevs_list": [ 00:12:19.315 { 00:12:19.315 "name": "spare", 00:12:19.315 "uuid": "61d3fa62-b2df-57b0-a8e3-2d64025cfe58", 00:12:19.315 "is_configured": true, 00:12:19.315 "data_offset": 0, 00:12:19.315 "data_size": 65536 00:12:19.315 }, 00:12:19.316 { 00:12:19.316 "name": "BaseBdev2", 00:12:19.316 "uuid": "be1aecfa-bf3b-5bb2-8fd4-3b73da0613f6", 00:12:19.316 "is_configured": true, 00:12:19.316 "data_offset": 0, 00:12:19.316 "data_size": 65536 00:12:19.316 }, 00:12:19.316 { 00:12:19.316 "name": "BaseBdev3", 00:12:19.316 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:19.316 "is_configured": true, 00:12:19.316 "data_offset": 0, 00:12:19.316 "data_size": 65536 00:12:19.316 }, 00:12:19.316 { 00:12:19.316 "name": "BaseBdev4", 00:12:19.316 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:19.316 "is_configured": true, 00:12:19.316 "data_offset": 0, 00:12:19.316 "data_size": 65536 00:12:19.316 } 00:12:19.316 ] 00:12:19.316 }' 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.316 [2024-09-30 23:29:58.915996] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.316 [2024-09-30 23:29:58.995786] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:19.316 23:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.316 
23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.316 "name": "raid_bdev1", 00:12:19.316 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:19.316 "strip_size_kb": 0, 00:12:19.316 "state": "online", 00:12:19.316 "raid_level": "raid1", 00:12:19.316 "superblock": false, 00:12:19.316 "num_base_bdevs": 4, 00:12:19.316 "num_base_bdevs_discovered": 3, 00:12:19.316 "num_base_bdevs_operational": 3, 00:12:19.316 "process": { 00:12:19.316 "type": "rebuild", 00:12:19.316 "target": "spare", 00:12:19.316 "progress": { 00:12:19.316 "blocks": 24576, 00:12:19.316 "percent": 37 00:12:19.316 } 00:12:19.316 }, 00:12:19.316 "base_bdevs_list": [ 00:12:19.316 { 00:12:19.316 "name": "spare", 00:12:19.316 "uuid": "61d3fa62-b2df-57b0-a8e3-2d64025cfe58", 00:12:19.316 "is_configured": true, 00:12:19.316 "data_offset": 0, 00:12:19.316 "data_size": 65536 00:12:19.316 }, 00:12:19.316 { 00:12:19.316 "name": null, 00:12:19.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.316 "is_configured": false, 00:12:19.316 "data_offset": 0, 00:12:19.316 "data_size": 65536 00:12:19.316 }, 00:12:19.316 { 00:12:19.316 "name": "BaseBdev3", 00:12:19.316 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:19.316 "is_configured": true, 00:12:19.316 "data_offset": 0, 00:12:19.316 "data_size": 65536 00:12:19.316 }, 00:12:19.316 { 
00:12:19.316 "name": "BaseBdev4", 00:12:19.316 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:19.316 "is_configured": true, 00:12:19.316 "data_offset": 0, 00:12:19.316 "data_size": 65536 00:12:19.316 } 00:12:19.316 ] 00:12:19.316 }' 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=360 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.316 23:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.576 23:29:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.576 "name": "raid_bdev1", 00:12:19.576 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:19.576 "strip_size_kb": 0, 00:12:19.576 "state": "online", 00:12:19.576 "raid_level": "raid1", 00:12:19.576 "superblock": false, 00:12:19.576 "num_base_bdevs": 4, 00:12:19.576 "num_base_bdevs_discovered": 3, 00:12:19.576 "num_base_bdevs_operational": 3, 00:12:19.576 "process": { 00:12:19.577 "type": "rebuild", 00:12:19.577 "target": "spare", 00:12:19.577 "progress": { 00:12:19.577 "blocks": 26624, 00:12:19.577 "percent": 40 00:12:19.577 } 00:12:19.577 }, 00:12:19.577 "base_bdevs_list": [ 00:12:19.577 { 00:12:19.577 "name": "spare", 00:12:19.577 "uuid": "61d3fa62-b2df-57b0-a8e3-2d64025cfe58", 00:12:19.577 "is_configured": true, 00:12:19.577 "data_offset": 0, 00:12:19.577 "data_size": 65536 00:12:19.577 }, 00:12:19.577 { 00:12:19.577 "name": null, 00:12:19.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.577 "is_configured": false, 00:12:19.577 "data_offset": 0, 00:12:19.577 "data_size": 65536 00:12:19.577 }, 00:12:19.577 { 00:12:19.577 "name": "BaseBdev3", 00:12:19.577 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:19.577 "is_configured": true, 00:12:19.577 "data_offset": 0, 00:12:19.577 "data_size": 65536 00:12:19.577 }, 00:12:19.577 { 00:12:19.577 "name": "BaseBdev4", 00:12:19.577 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:19.577 "is_configured": true, 00:12:19.577 "data_offset": 0, 00:12:19.577 "data_size": 65536 00:12:19.577 } 00:12:19.577 ] 00:12:19.577 }' 00:12:19.577 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.577 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.577 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.577 23:29:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.577 23:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.514 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.514 "name": "raid_bdev1", 00:12:20.514 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:20.514 "strip_size_kb": 0, 00:12:20.514 "state": "online", 00:12:20.514 "raid_level": "raid1", 00:12:20.514 "superblock": false, 00:12:20.514 "num_base_bdevs": 4, 00:12:20.514 "num_base_bdevs_discovered": 3, 00:12:20.514 "num_base_bdevs_operational": 3, 00:12:20.514 "process": { 00:12:20.514 "type": "rebuild", 00:12:20.514 "target": "spare", 00:12:20.514 "progress": { 00:12:20.514 "blocks": 49152, 00:12:20.514 "percent": 75 00:12:20.514 } 00:12:20.514 }, 00:12:20.514 
"base_bdevs_list": [ 00:12:20.514 { 00:12:20.514 "name": "spare", 00:12:20.514 "uuid": "61d3fa62-b2df-57b0-a8e3-2d64025cfe58", 00:12:20.514 "is_configured": true, 00:12:20.514 "data_offset": 0, 00:12:20.515 "data_size": 65536 00:12:20.515 }, 00:12:20.515 { 00:12:20.515 "name": null, 00:12:20.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.515 "is_configured": false, 00:12:20.515 "data_offset": 0, 00:12:20.515 "data_size": 65536 00:12:20.515 }, 00:12:20.515 { 00:12:20.515 "name": "BaseBdev3", 00:12:20.515 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:20.515 "is_configured": true, 00:12:20.515 "data_offset": 0, 00:12:20.515 "data_size": 65536 00:12:20.515 }, 00:12:20.515 { 00:12:20.515 "name": "BaseBdev4", 00:12:20.515 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:20.515 "is_configured": true, 00:12:20.515 "data_offset": 0, 00:12:20.515 "data_size": 65536 00:12:20.515 } 00:12:20.515 ] 00:12:20.515 }' 00:12:20.515 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.774 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:20.774 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.774 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:20.774 23:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:21.349 [2024-09-30 23:30:01.009106] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:21.349 [2024-09-30 23:30:01.009189] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:21.349 [2024-09-30 23:30:01.009231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.645 23:30:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.645 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.645 "name": "raid_bdev1", 00:12:21.645 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:21.645 "strip_size_kb": 0, 00:12:21.645 "state": "online", 00:12:21.645 "raid_level": "raid1", 00:12:21.645 "superblock": false, 00:12:21.645 "num_base_bdevs": 4, 00:12:21.645 "num_base_bdevs_discovered": 3, 00:12:21.645 "num_base_bdevs_operational": 3, 00:12:21.645 "base_bdevs_list": [ 00:12:21.645 { 00:12:21.645 "name": "spare", 00:12:21.645 "uuid": "61d3fa62-b2df-57b0-a8e3-2d64025cfe58", 00:12:21.645 "is_configured": true, 00:12:21.645 "data_offset": 0, 00:12:21.645 "data_size": 65536 00:12:21.645 }, 00:12:21.645 { 00:12:21.645 "name": null, 00:12:21.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.645 "is_configured": false, 00:12:21.645 "data_offset": 0, 00:12:21.645 "data_size": 65536 00:12:21.645 }, 
00:12:21.645 { 00:12:21.645 "name": "BaseBdev3", 00:12:21.645 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:21.645 "is_configured": true, 00:12:21.645 "data_offset": 0, 00:12:21.645 "data_size": 65536 00:12:21.645 }, 00:12:21.645 { 00:12:21.645 "name": "BaseBdev4", 00:12:21.645 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:21.645 "is_configured": true, 00:12:21.645 "data_offset": 0, 00:12:21.645 "data_size": 65536 00:12:21.645 } 00:12:21.645 ] 00:12:21.645 }' 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.921 
23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.921 "name": "raid_bdev1", 00:12:21.921 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:21.921 "strip_size_kb": 0, 00:12:21.921 "state": "online", 00:12:21.921 "raid_level": "raid1", 00:12:21.921 "superblock": false, 00:12:21.921 "num_base_bdevs": 4, 00:12:21.921 "num_base_bdevs_discovered": 3, 00:12:21.921 "num_base_bdevs_operational": 3, 00:12:21.921 "base_bdevs_list": [ 00:12:21.921 { 00:12:21.921 "name": "spare", 00:12:21.921 "uuid": "61d3fa62-b2df-57b0-a8e3-2d64025cfe58", 00:12:21.921 "is_configured": true, 00:12:21.921 "data_offset": 0, 00:12:21.921 "data_size": 65536 00:12:21.921 }, 00:12:21.921 { 00:12:21.921 "name": null, 00:12:21.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.921 "is_configured": false, 00:12:21.921 "data_offset": 0, 00:12:21.921 "data_size": 65536 00:12:21.921 }, 00:12:21.921 { 00:12:21.921 "name": "BaseBdev3", 00:12:21.921 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:21.921 "is_configured": true, 00:12:21.921 "data_offset": 0, 00:12:21.921 "data_size": 65536 00:12:21.921 }, 00:12:21.921 { 00:12:21.921 "name": "BaseBdev4", 00:12:21.921 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:21.921 "is_configured": true, 00:12:21.921 "data_offset": 0, 00:12:21.921 "data_size": 65536 00:12:21.921 } 00:12:21.921 ] 00:12:21.921 }' 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.921 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.921 "name": "raid_bdev1", 00:12:21.921 "uuid": "6ff5a3eb-3d7c-4fbf-921e-33718bdf4451", 00:12:21.921 "strip_size_kb": 0, 00:12:21.921 "state": "online", 00:12:21.921 "raid_level": "raid1", 00:12:21.921 "superblock": false, 00:12:21.921 "num_base_bdevs": 4, 00:12:21.921 "num_base_bdevs_discovered": 3, 00:12:21.921 
"num_base_bdevs_operational": 3, 00:12:21.921 "base_bdevs_list": [ 00:12:21.921 { 00:12:21.921 "name": "spare", 00:12:21.921 "uuid": "61d3fa62-b2df-57b0-a8e3-2d64025cfe58", 00:12:21.921 "is_configured": true, 00:12:21.921 "data_offset": 0, 00:12:21.921 "data_size": 65536 00:12:21.921 }, 00:12:21.921 { 00:12:21.921 "name": null, 00:12:21.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.921 "is_configured": false, 00:12:21.921 "data_offset": 0, 00:12:21.921 "data_size": 65536 00:12:21.921 }, 00:12:21.921 { 00:12:21.921 "name": "BaseBdev3", 00:12:21.921 "uuid": "e499c259-e73a-52af-b993-a0d84622d626", 00:12:21.921 "is_configured": true, 00:12:21.921 "data_offset": 0, 00:12:21.921 "data_size": 65536 00:12:21.921 }, 00:12:21.921 { 00:12:21.921 "name": "BaseBdev4", 00:12:21.921 "uuid": "57418f18-4644-5edc-a1e2-c16e8f2a3131", 00:12:21.921 "is_configured": true, 00:12:21.921 "data_offset": 0, 00:12:21.921 "data_size": 65536 00:12:21.921 } 00:12:21.921 ] 00:12:21.922 }' 00:12:21.922 23:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.922 23:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.491 [2024-09-30 23:30:02.082035] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.491 [2024-09-30 23:30:02.082070] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.491 [2024-09-30 23:30:02.082180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.491 [2024-09-30 23:30:02.082271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:12:22.491 [2024-09-30 23:30:02.082285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.491 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:22.491 /dev/nbd0 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:22.751 1+0 records in 00:12:22.751 1+0 records out 00:12:22.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321685 s, 12.7 MB/s 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:22.751 23:30:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:22.751 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:22.751 /dev/nbd1 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:23.010 1+0 records in 00:12:23.010 1+0 records out 00:12:23.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368903 s, 11.1 MB/s 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.010 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.269 23:30:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88229 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88229 ']' 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88229 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88229 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.529 killing process with pid 88229 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88229' 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88229 00:12:23.529 Received shutdown signal, test time was about 60.000000 seconds 00:12:23.529 00:12:23.529 Latency(us) 00:12:23.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.529 =================================================================================================================== 00:12:23.529 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:23.529 [2024-09-30 23:30:03.194581] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.529 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88229 00:12:23.529 [2024-09-30 23:30:03.287079] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:24.099 00:12:24.099 real 0m15.306s 00:12:24.099 user 0m17.110s 00:12:24.099 sys 0m3.143s 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.099 ************************************ 00:12:24.099 END TEST raid_rebuild_test 00:12:24.099 ************************************ 00:12:24.099 23:30:03 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb 
raid_rebuild_test raid1 4 true false true 00:12:24.099 23:30:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:24.099 23:30:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.099 23:30:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.099 ************************************ 00:12:24.099 START TEST raid_rebuild_test_sb 00:12:24.099 ************************************ 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:24.099 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88654 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:24.100 23:30:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88654 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88654 ']' 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:24.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:24.100 23:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.100 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:24.100 Zero copy mechanism will not be used. 00:12:24.100 [2024-09-30 23:30:03.827824] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:12:24.100 [2024-09-30 23:30:03.827969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88654 ] 00:12:24.359 [2024-09-30 23:30:03.994530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.359 [2024-09-30 23:30:04.063023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.359 [2024-09-30 23:30:04.138271] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.359 [2024-09-30 23:30:04.138309] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.928 BaseBdev1_malloc 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.928 [2024-09-30 23:30:04.664519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:24.928 [2024-09-30 23:30:04.664599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.928 [2024-09-30 23:30:04.664625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:24.928 [2024-09-30 23:30:04.664649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.928 [2024-09-30 23:30:04.667021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.928 [2024-09-30 23:30:04.667052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:24.928 BaseBdev1 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.928 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 BaseBdev2_malloc 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 [2024-09-30 23:30:04.708018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:24.929 [2024-09-30 23:30:04.708067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.929 [2024-09-30 23:30:04.708089] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:24.929 [2024-09-30 23:30:04.708098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.929 [2024-09-30 23:30:04.710394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.929 [2024-09-30 23:30:04.710423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:24.929 BaseBdev2 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 BaseBdev3_malloc 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 [2024-09-30 23:30:04.742448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:24.929 [2024-09-30 23:30:04.742489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.929 [2024-09-30 23:30:04.742514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:24.929 [2024-09-30 23:30:04.742522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:24.929 [2024-09-30 23:30:04.744815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.929 [2024-09-30 23:30:04.744845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:24.929 BaseBdev3 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 BaseBdev4_malloc 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 [2024-09-30 23:30:04.776827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:24.929 [2024-09-30 23:30:04.776900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.929 [2024-09-30 23:30:04.776926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:24.929 [2024-09-30 23:30:04.776935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.929 [2024-09-30 23:30:04.779371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.929 [2024-09-30 23:30:04.779399] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:24.929 BaseBdev4 00:12:24.929 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 spare_malloc 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 spare_delay 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 [2024-09-30 23:30:04.823251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:25.189 [2024-09-30 23:30:04.823296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.189 [2024-09-30 23:30:04.823317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:25.189 [2024-09-30 23:30:04.823325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:25.189 [2024-09-30 23:30:04.825640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.189 [2024-09-30 23:30:04.825670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:25.189 spare 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 [2024-09-30 23:30:04.835344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.189 [2024-09-30 23:30:04.837410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.189 [2024-09-30 23:30:04.837482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.189 [2024-09-30 23:30:04.837524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:25.189 [2024-09-30 23:30:04.837690] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:25.189 [2024-09-30 23:30:04.837704] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.189 [2024-09-30 23:30:04.837967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:25.189 [2024-09-30 23:30:04.838133] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:25.189 [2024-09-30 23:30:04.838153] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:25.189 [2024-09-30 23:30:04.838278] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.189 "name": "raid_bdev1", 00:12:25.189 "uuid": 
"1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:25.189 "strip_size_kb": 0, 00:12:25.189 "state": "online", 00:12:25.189 "raid_level": "raid1", 00:12:25.189 "superblock": true, 00:12:25.189 "num_base_bdevs": 4, 00:12:25.189 "num_base_bdevs_discovered": 4, 00:12:25.189 "num_base_bdevs_operational": 4, 00:12:25.189 "base_bdevs_list": [ 00:12:25.189 { 00:12:25.189 "name": "BaseBdev1", 00:12:25.189 "uuid": "afd60098-d8a0-5407-a2a2-b25aab120181", 00:12:25.189 "is_configured": true, 00:12:25.189 "data_offset": 2048, 00:12:25.189 "data_size": 63488 00:12:25.189 }, 00:12:25.189 { 00:12:25.189 "name": "BaseBdev2", 00:12:25.189 "uuid": "a99e16cf-5b8e-5f5a-8bb5-0a5a507535ef", 00:12:25.189 "is_configured": true, 00:12:25.189 "data_offset": 2048, 00:12:25.189 "data_size": 63488 00:12:25.189 }, 00:12:25.189 { 00:12:25.189 "name": "BaseBdev3", 00:12:25.189 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:25.189 "is_configured": true, 00:12:25.189 "data_offset": 2048, 00:12:25.189 "data_size": 63488 00:12:25.189 }, 00:12:25.189 { 00:12:25.189 "name": "BaseBdev4", 00:12:25.189 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:25.189 "is_configured": true, 00:12:25.189 "data_offset": 2048, 00:12:25.189 "data_size": 63488 00:12:25.189 } 00:12:25.189 ] 00:12:25.189 }' 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.189 23:30:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.758 [2024-09-30 23:30:05.314712] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.758 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:25.759 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.759 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:25.759 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.759 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:25.759 23:30:05 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.759 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.759 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:25.759 [2024-09-30 23:30:05.582088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:25.759 /dev/nbd0 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.018 1+0 records in 00:12:26.018 1+0 records out 00:12:26.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464834 s, 8.8 MB/s 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:26.018 23:30:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:31.293 63488+0 records in 00:12:31.293 63488+0 records out 00:12:31.293 32505856 bytes (33 MB, 31 MiB) copied, 5.13087 s, 6.3 MB/s 00:12:31.293 23:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:31.293 23:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.293 23:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:31.293 23:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.293 23:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:31.293 23:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.293 23:30:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:12:31.293 [2024-09-30 23:30:10.989022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 [2024-09-30 23:30:11.021026] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.293 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.293 "name": "raid_bdev1", 00:12:31.293 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:31.293 "strip_size_kb": 0, 00:12:31.293 "state": "online", 00:12:31.293 "raid_level": "raid1", 00:12:31.293 "superblock": true, 00:12:31.294 "num_base_bdevs": 4, 00:12:31.294 "num_base_bdevs_discovered": 3, 00:12:31.294 "num_base_bdevs_operational": 3, 00:12:31.294 "base_bdevs_list": [ 00:12:31.294 { 00:12:31.294 "name": null, 00:12:31.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.294 "is_configured": false, 00:12:31.294 "data_offset": 0, 00:12:31.294 "data_size": 63488 00:12:31.294 }, 00:12:31.294 { 00:12:31.294 "name": "BaseBdev2", 00:12:31.294 "uuid": "a99e16cf-5b8e-5f5a-8bb5-0a5a507535ef", 00:12:31.294 "is_configured": true, 00:12:31.294 
"data_offset": 2048, 00:12:31.294 "data_size": 63488 00:12:31.294 }, 00:12:31.294 { 00:12:31.294 "name": "BaseBdev3", 00:12:31.294 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:31.294 "is_configured": true, 00:12:31.294 "data_offset": 2048, 00:12:31.294 "data_size": 63488 00:12:31.294 }, 00:12:31.294 { 00:12:31.294 "name": "BaseBdev4", 00:12:31.294 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:31.294 "is_configured": true, 00:12:31.294 "data_offset": 2048, 00:12:31.294 "data_size": 63488 00:12:31.294 } 00:12:31.294 ] 00:12:31.294 }' 00:12:31.294 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.294 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.862 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:31.862 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.862 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.862 [2024-09-30 23:30:11.464296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.862 [2024-09-30 23:30:11.470151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:31.862 23:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.862 23:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:31.862 [2024-09-30 23:30:11.472360] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.800 "name": "raid_bdev1", 00:12:32.800 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:32.800 "strip_size_kb": 0, 00:12:32.800 "state": "online", 00:12:32.800 "raid_level": "raid1", 00:12:32.800 "superblock": true, 00:12:32.800 "num_base_bdevs": 4, 00:12:32.800 "num_base_bdevs_discovered": 4, 00:12:32.800 "num_base_bdevs_operational": 4, 00:12:32.800 "process": { 00:12:32.800 "type": "rebuild", 00:12:32.800 "target": "spare", 00:12:32.800 "progress": { 00:12:32.800 "blocks": 20480, 00:12:32.800 "percent": 32 00:12:32.800 } 00:12:32.800 }, 00:12:32.800 "base_bdevs_list": [ 00:12:32.800 { 00:12:32.800 "name": "spare", 00:12:32.800 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:32.800 "is_configured": true, 00:12:32.800 "data_offset": 2048, 00:12:32.800 "data_size": 63488 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "name": "BaseBdev2", 00:12:32.800 "uuid": "a99e16cf-5b8e-5f5a-8bb5-0a5a507535ef", 00:12:32.800 "is_configured": true, 00:12:32.800 "data_offset": 2048, 00:12:32.800 "data_size": 63488 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "name": "BaseBdev3", 00:12:32.800 "uuid": 
"7066e3ad-b371-518f-9f3c-398a86390782", 00:12:32.800 "is_configured": true, 00:12:32.800 "data_offset": 2048, 00:12:32.800 "data_size": 63488 00:12:32.800 }, 00:12:32.800 { 00:12:32.800 "name": "BaseBdev4", 00:12:32.800 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:32.800 "is_configured": true, 00:12:32.800 "data_offset": 2048, 00:12:32.800 "data_size": 63488 00:12:32.800 } 00:12:32.800 ] 00:12:32.800 }' 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.800 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.800 [2024-09-30 23:30:12.628027] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.059 [2024-09-30 23:30:12.680751] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:33.059 [2024-09-30 23:30:12.680822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.059 [2024-09-30 23:30:12.680844] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.059 [2024-09-30 23:30:12.680852] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.059 "name": "raid_bdev1", 00:12:33.059 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:33.059 "strip_size_kb": 0, 00:12:33.059 "state": "online", 00:12:33.059 "raid_level": "raid1", 00:12:33.059 "superblock": true, 00:12:33.059 "num_base_bdevs": 4, 00:12:33.059 
"num_base_bdevs_discovered": 3, 00:12:33.059 "num_base_bdevs_operational": 3, 00:12:33.059 "base_bdevs_list": [ 00:12:33.059 { 00:12:33.059 "name": null, 00:12:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.059 "is_configured": false, 00:12:33.059 "data_offset": 0, 00:12:33.059 "data_size": 63488 00:12:33.059 }, 00:12:33.059 { 00:12:33.059 "name": "BaseBdev2", 00:12:33.059 "uuid": "a99e16cf-5b8e-5f5a-8bb5-0a5a507535ef", 00:12:33.059 "is_configured": true, 00:12:33.059 "data_offset": 2048, 00:12:33.059 "data_size": 63488 00:12:33.059 }, 00:12:33.059 { 00:12:33.059 "name": "BaseBdev3", 00:12:33.059 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:33.059 "is_configured": true, 00:12:33.059 "data_offset": 2048, 00:12:33.059 "data_size": 63488 00:12:33.059 }, 00:12:33.059 { 00:12:33.059 "name": "BaseBdev4", 00:12:33.059 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:33.059 "is_configured": true, 00:12:33.059 "data_offset": 2048, 00:12:33.059 "data_size": 63488 00:12:33.059 } 00:12:33.059 ] 00:12:33.059 }' 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.059 23:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.318 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.318 "name": "raid_bdev1", 00:12:33.318 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:33.318 "strip_size_kb": 0, 00:12:33.318 "state": "online", 00:12:33.318 "raid_level": "raid1", 00:12:33.318 "superblock": true, 00:12:33.318 "num_base_bdevs": 4, 00:12:33.318 "num_base_bdevs_discovered": 3, 00:12:33.318 "num_base_bdevs_operational": 3, 00:12:33.318 "base_bdevs_list": [ 00:12:33.318 { 00:12:33.318 "name": null, 00:12:33.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.318 "is_configured": false, 00:12:33.318 "data_offset": 0, 00:12:33.318 "data_size": 63488 00:12:33.318 }, 00:12:33.318 { 00:12:33.318 "name": "BaseBdev2", 00:12:33.318 "uuid": "a99e16cf-5b8e-5f5a-8bb5-0a5a507535ef", 00:12:33.318 "is_configured": true, 00:12:33.318 "data_offset": 2048, 00:12:33.318 "data_size": 63488 00:12:33.318 }, 00:12:33.318 { 00:12:33.318 "name": "BaseBdev3", 00:12:33.318 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:33.318 "is_configured": true, 00:12:33.318 "data_offset": 2048, 00:12:33.318 "data_size": 63488 00:12:33.318 }, 00:12:33.318 { 00:12:33.318 "name": "BaseBdev4", 00:12:33.318 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:33.318 "is_configured": true, 00:12:33.318 "data_offset": 2048, 00:12:33.318 "data_size": 63488 00:12:33.318 } 00:12:33.318 ] 00:12:33.318 }' 00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.577 [2024-09-30 23:30:13.246708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.577 [2024-09-30 23:30:13.252063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.577 23:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:33.577 [2024-09-30 23:30:13.254162] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.515 "name": "raid_bdev1", 00:12:34.515 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:34.515 "strip_size_kb": 0, 00:12:34.515 "state": "online", 00:12:34.515 "raid_level": "raid1", 00:12:34.515 "superblock": true, 00:12:34.515 "num_base_bdevs": 4, 00:12:34.515 "num_base_bdevs_discovered": 4, 00:12:34.515 "num_base_bdevs_operational": 4, 00:12:34.515 "process": { 00:12:34.515 "type": "rebuild", 00:12:34.515 "target": "spare", 00:12:34.515 "progress": { 00:12:34.515 "blocks": 20480, 00:12:34.515 "percent": 32 00:12:34.515 } 00:12:34.515 }, 00:12:34.515 "base_bdevs_list": [ 00:12:34.515 { 00:12:34.515 "name": "spare", 00:12:34.515 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:34.515 "is_configured": true, 00:12:34.515 "data_offset": 2048, 00:12:34.515 "data_size": 63488 00:12:34.515 }, 00:12:34.515 { 00:12:34.515 "name": "BaseBdev2", 00:12:34.515 "uuid": "a99e16cf-5b8e-5f5a-8bb5-0a5a507535ef", 00:12:34.515 "is_configured": true, 00:12:34.515 "data_offset": 2048, 00:12:34.515 "data_size": 63488 00:12:34.515 }, 00:12:34.515 { 00:12:34.515 "name": "BaseBdev3", 00:12:34.515 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:34.515 "is_configured": true, 00:12:34.515 "data_offset": 2048, 00:12:34.515 "data_size": 63488 00:12:34.515 }, 00:12:34.515 { 00:12:34.515 "name": "BaseBdev4", 00:12:34.515 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:34.515 "is_configured": true, 00:12:34.515 "data_offset": 2048, 00:12:34.515 "data_size": 63488 00:12:34.515 } 00:12:34.515 ] 00:12:34.515 }' 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.515 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:34.774 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.774 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.774 [2024-09-30 23:30:14.418132] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.775 [2024-09-30 23:30:14.561744] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.775 "name": "raid_bdev1", 00:12:34.775 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:34.775 "strip_size_kb": 0, 00:12:34.775 "state": "online", 00:12:34.775 "raid_level": "raid1", 00:12:34.775 "superblock": true, 00:12:34.775 "num_base_bdevs": 4, 00:12:34.775 "num_base_bdevs_discovered": 3, 00:12:34.775 "num_base_bdevs_operational": 3, 00:12:34.775 "process": { 00:12:34.775 "type": "rebuild", 00:12:34.775 "target": "spare", 00:12:34.775 "progress": { 00:12:34.775 "blocks": 24576, 00:12:34.775 "percent": 38 00:12:34.775 } 00:12:34.775 }, 00:12:34.775 "base_bdevs_list": [ 00:12:34.775 { 00:12:34.775 "name": "spare", 00:12:34.775 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:34.775 "is_configured": true, 00:12:34.775 "data_offset": 2048, 00:12:34.775 "data_size": 63488 00:12:34.775 }, 00:12:34.775 { 00:12:34.775 "name": null, 00:12:34.775 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:34.775 "is_configured": false, 00:12:34.775 "data_offset": 0, 00:12:34.775 "data_size": 63488 00:12:34.775 }, 00:12:34.775 { 00:12:34.775 "name": "BaseBdev3", 00:12:34.775 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:34.775 "is_configured": true, 00:12:34.775 "data_offset": 2048, 00:12:34.775 "data_size": 63488 00:12:34.775 }, 00:12:34.775 { 00:12:34.775 "name": "BaseBdev4", 00:12:34.775 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:34.775 "is_configured": true, 00:12:34.775 "data_offset": 2048, 00:12:34.775 "data_size": 63488 00:12:34.775 } 00:12:34.775 ] 00:12:34.775 }' 00:12:34.775 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=375 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.034 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.034 
23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.035 "name": "raid_bdev1", 00:12:35.035 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:35.035 "strip_size_kb": 0, 00:12:35.035 "state": "online", 00:12:35.035 "raid_level": "raid1", 00:12:35.035 "superblock": true, 00:12:35.035 "num_base_bdevs": 4, 00:12:35.035 "num_base_bdevs_discovered": 3, 00:12:35.035 "num_base_bdevs_operational": 3, 00:12:35.035 "process": { 00:12:35.035 "type": "rebuild", 00:12:35.035 "target": "spare", 00:12:35.035 "progress": { 00:12:35.035 "blocks": 26624, 00:12:35.035 "percent": 41 00:12:35.035 } 00:12:35.035 }, 00:12:35.035 "base_bdevs_list": [ 00:12:35.035 { 00:12:35.035 "name": "spare", 00:12:35.035 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:35.035 "is_configured": true, 00:12:35.035 "data_offset": 2048, 00:12:35.035 "data_size": 63488 00:12:35.035 }, 00:12:35.035 { 00:12:35.035 "name": null, 00:12:35.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.035 "is_configured": false, 00:12:35.035 "data_offset": 0, 00:12:35.035 "data_size": 63488 00:12:35.035 }, 00:12:35.035 { 00:12:35.035 "name": "BaseBdev3", 00:12:35.035 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:35.035 "is_configured": true, 00:12:35.035 "data_offset": 2048, 00:12:35.035 "data_size": 63488 00:12:35.035 }, 00:12:35.035 { 00:12:35.035 "name": "BaseBdev4", 00:12:35.035 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:35.035 "is_configured": true, 00:12:35.035 "data_offset": 2048, 00:12:35.035 "data_size": 63488 
00:12:35.035 } 00:12:35.035 ] 00:12:35.035 }' 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.035 23:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:35.972 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.972 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.972 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.972 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.972 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.972 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.233 "name": "raid_bdev1", 00:12:36.233 "uuid": 
"1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:36.233 "strip_size_kb": 0, 00:12:36.233 "state": "online", 00:12:36.233 "raid_level": "raid1", 00:12:36.233 "superblock": true, 00:12:36.233 "num_base_bdevs": 4, 00:12:36.233 "num_base_bdevs_discovered": 3, 00:12:36.233 "num_base_bdevs_operational": 3, 00:12:36.233 "process": { 00:12:36.233 "type": "rebuild", 00:12:36.233 "target": "spare", 00:12:36.233 "progress": { 00:12:36.233 "blocks": 49152, 00:12:36.233 "percent": 77 00:12:36.233 } 00:12:36.233 }, 00:12:36.233 "base_bdevs_list": [ 00:12:36.233 { 00:12:36.233 "name": "spare", 00:12:36.233 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:36.233 "is_configured": true, 00:12:36.233 "data_offset": 2048, 00:12:36.233 "data_size": 63488 00:12:36.233 }, 00:12:36.233 { 00:12:36.233 "name": null, 00:12:36.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.233 "is_configured": false, 00:12:36.233 "data_offset": 0, 00:12:36.233 "data_size": 63488 00:12:36.233 }, 00:12:36.233 { 00:12:36.233 "name": "BaseBdev3", 00:12:36.233 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:36.233 "is_configured": true, 00:12:36.233 "data_offset": 2048, 00:12:36.233 "data_size": 63488 00:12:36.233 }, 00:12:36.233 { 00:12:36.233 "name": "BaseBdev4", 00:12:36.233 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:36.233 "is_configured": true, 00:12:36.233 "data_offset": 2048, 00:12:36.233 "data_size": 63488 00:12:36.233 } 00:12:36.233 ] 00:12:36.233 }' 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.233 23:30:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:36.802 [2024-09-30 23:30:16.474208] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:36.802 [2024-09-30 23:30:16.474295] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:36.802 [2024-09-30 23:30:16.474408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.371 23:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.371 "name": "raid_bdev1", 00:12:37.371 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:37.371 "strip_size_kb": 0, 00:12:37.371 "state": "online", 00:12:37.371 "raid_level": "raid1", 00:12:37.371 "superblock": true, 00:12:37.371 "num_base_bdevs": 
4, 00:12:37.371 "num_base_bdevs_discovered": 3, 00:12:37.371 "num_base_bdevs_operational": 3, 00:12:37.371 "base_bdevs_list": [ 00:12:37.371 { 00:12:37.371 "name": "spare", 00:12:37.371 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:37.371 "is_configured": true, 00:12:37.371 "data_offset": 2048, 00:12:37.371 "data_size": 63488 00:12:37.371 }, 00:12:37.371 { 00:12:37.371 "name": null, 00:12:37.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.371 "is_configured": false, 00:12:37.371 "data_offset": 0, 00:12:37.371 "data_size": 63488 00:12:37.371 }, 00:12:37.371 { 00:12:37.371 "name": "BaseBdev3", 00:12:37.371 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:37.371 "is_configured": true, 00:12:37.371 "data_offset": 2048, 00:12:37.371 "data_size": 63488 00:12:37.371 }, 00:12:37.371 { 00:12:37.371 "name": "BaseBdev4", 00:12:37.371 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:37.371 "is_configured": true, 00:12:37.371 "data_offset": 2048, 00:12:37.371 "data_size": 63488 00:12:37.371 } 00:12:37.371 ] 00:12:37.371 }' 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.371 23:30:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.371 "name": "raid_bdev1", 00:12:37.371 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:37.371 "strip_size_kb": 0, 00:12:37.371 "state": "online", 00:12:37.371 "raid_level": "raid1", 00:12:37.371 "superblock": true, 00:12:37.371 "num_base_bdevs": 4, 00:12:37.371 "num_base_bdevs_discovered": 3, 00:12:37.371 "num_base_bdevs_operational": 3, 00:12:37.371 "base_bdevs_list": [ 00:12:37.371 { 00:12:37.371 "name": "spare", 00:12:37.371 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:37.371 "is_configured": true, 00:12:37.371 "data_offset": 2048, 00:12:37.371 "data_size": 63488 00:12:37.371 }, 00:12:37.371 { 00:12:37.371 "name": null, 00:12:37.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.371 "is_configured": false, 00:12:37.371 "data_offset": 0, 00:12:37.371 "data_size": 63488 00:12:37.371 }, 00:12:37.371 { 00:12:37.371 "name": "BaseBdev3", 00:12:37.371 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:37.371 "is_configured": true, 00:12:37.371 "data_offset": 2048, 00:12:37.371 "data_size": 63488 00:12:37.371 }, 00:12:37.371 { 00:12:37.371 "name": "BaseBdev4", 00:12:37.371 "uuid": 
"e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:37.371 "is_configured": true, 00:12:37.371 "data_offset": 2048, 00:12:37.371 "data_size": 63488 00:12:37.371 } 00:12:37.371 ] 00:12:37.371 }' 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.371 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.631 23:30:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.631 "name": "raid_bdev1", 00:12:37.631 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:37.631 "strip_size_kb": 0, 00:12:37.631 "state": "online", 00:12:37.631 "raid_level": "raid1", 00:12:37.631 "superblock": true, 00:12:37.631 "num_base_bdevs": 4, 00:12:37.631 "num_base_bdevs_discovered": 3, 00:12:37.631 "num_base_bdevs_operational": 3, 00:12:37.631 "base_bdevs_list": [ 00:12:37.631 { 00:12:37.631 "name": "spare", 00:12:37.631 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:37.631 "is_configured": true, 00:12:37.631 "data_offset": 2048, 00:12:37.631 "data_size": 63488 00:12:37.631 }, 00:12:37.631 { 00:12:37.631 "name": null, 00:12:37.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.631 "is_configured": false, 00:12:37.631 "data_offset": 0, 00:12:37.631 "data_size": 63488 00:12:37.631 }, 00:12:37.631 { 00:12:37.631 "name": "BaseBdev3", 00:12:37.631 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:37.631 "is_configured": true, 00:12:37.631 "data_offset": 2048, 00:12:37.631 "data_size": 63488 00:12:37.631 }, 00:12:37.631 { 00:12:37.631 "name": "BaseBdev4", 00:12:37.631 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:37.631 "is_configured": true, 00:12:37.631 "data_offset": 2048, 00:12:37.631 "data_size": 63488 00:12:37.631 } 00:12:37.631 ] 00:12:37.631 }' 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.631 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.891 [2024-09-30 23:30:17.723331] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.891 [2024-09-30 23:30:17.723371] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.891 [2024-09-30 23:30:17.723538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.891 [2024-09-30 23:30:17.723675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.891 [2024-09-30 23:30:17.723690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:37.891 23:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.149 23:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:38.149 /dev/nbd0 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:38.409 
23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.409 1+0 records in 00:12:38.409 1+0 records out 00:12:38.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035913 s, 11.4 MB/s 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.409 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:38.409 /dev/nbd1 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.669 1+0 records in 00:12:38.669 1+0 records out 00:12:38.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225601 s, 18.2 MB/s 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.669 23:30:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.669 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.929 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:39.188 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:39.188 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:39.188 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:39.188 23:30:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.189 [2024-09-30 23:30:18.820009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:39.189 [2024-09-30 23:30:18.820072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.189 [2024-09-30 23:30:18.820094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:39.189 [2024-09-30 23:30:18.820109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.189 [2024-09-30 23:30:18.822568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.189 [2024-09-30 23:30:18.822605] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:39.189 [2024-09-30 23:30:18.822713] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:39.189 [2024-09-30 23:30:18.822772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.189 [2024-09-30 23:30:18.822926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.189 [2024-09-30 23:30:18.823029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:39.189 spare 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.189 [2024-09-30 23:30:18.922919] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:39.189 [2024-09-30 23:30:18.923021] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.189 [2024-09-30 23:30:18.923364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:39.189 [2024-09-30 23:30:18.923540] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:39.189 [2024-09-30 23:30:18.923552] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:39.189 [2024-09-30 23:30:18.923690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.189 "name": "raid_bdev1", 00:12:39.189 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:39.189 "strip_size_kb": 0, 00:12:39.189 "state": "online", 00:12:39.189 "raid_level": "raid1", 00:12:39.189 "superblock": true, 00:12:39.189 "num_base_bdevs": 4, 00:12:39.189 "num_base_bdevs_discovered": 3, 00:12:39.189 "num_base_bdevs_operational": 
3, 00:12:39.189 "base_bdevs_list": [ 00:12:39.189 { 00:12:39.189 "name": "spare", 00:12:39.189 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:39.189 "is_configured": true, 00:12:39.189 "data_offset": 2048, 00:12:39.189 "data_size": 63488 00:12:39.189 }, 00:12:39.189 { 00:12:39.189 "name": null, 00:12:39.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.189 "is_configured": false, 00:12:39.189 "data_offset": 2048, 00:12:39.189 "data_size": 63488 00:12:39.189 }, 00:12:39.189 { 00:12:39.189 "name": "BaseBdev3", 00:12:39.189 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:39.189 "is_configured": true, 00:12:39.189 "data_offset": 2048, 00:12:39.189 "data_size": 63488 00:12:39.189 }, 00:12:39.189 { 00:12:39.189 "name": "BaseBdev4", 00:12:39.189 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:39.189 "is_configured": true, 00:12:39.189 "data_offset": 2048, 00:12:39.189 "data_size": 63488 00:12:39.189 } 00:12:39.189 ] 00:12:39.189 }' 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.189 23:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.758 23:30:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.758 "name": "raid_bdev1", 00:12:39.758 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:39.758 "strip_size_kb": 0, 00:12:39.758 "state": "online", 00:12:39.758 "raid_level": "raid1", 00:12:39.758 "superblock": true, 00:12:39.758 "num_base_bdevs": 4, 00:12:39.758 "num_base_bdevs_discovered": 3, 00:12:39.758 "num_base_bdevs_operational": 3, 00:12:39.758 "base_bdevs_list": [ 00:12:39.758 { 00:12:39.758 "name": "spare", 00:12:39.758 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:39.758 "is_configured": true, 00:12:39.758 "data_offset": 2048, 00:12:39.758 "data_size": 63488 00:12:39.758 }, 00:12:39.758 { 00:12:39.758 "name": null, 00:12:39.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.758 "is_configured": false, 00:12:39.758 "data_offset": 2048, 00:12:39.758 "data_size": 63488 00:12:39.758 }, 00:12:39.758 { 00:12:39.758 "name": "BaseBdev3", 00:12:39.758 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:39.758 "is_configured": true, 00:12:39.758 "data_offset": 2048, 00:12:39.758 "data_size": 63488 00:12:39.758 }, 00:12:39.758 { 00:12:39.758 "name": "BaseBdev4", 00:12:39.758 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:39.758 "is_configured": true, 00:12:39.758 "data_offset": 2048, 00:12:39.758 "data_size": 63488 00:12:39.758 } 00:12:39.758 ] 00:12:39.758 }' 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.758 [2024-09-30 23:30:19.534968] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.758 "name": "raid_bdev1", 00:12:39.758 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:39.758 "strip_size_kb": 0, 00:12:39.758 "state": "online", 00:12:39.758 "raid_level": "raid1", 00:12:39.758 "superblock": true, 00:12:39.758 "num_base_bdevs": 4, 00:12:39.758 "num_base_bdevs_discovered": 2, 00:12:39.758 "num_base_bdevs_operational": 2, 00:12:39.758 "base_bdevs_list": [ 00:12:39.758 { 00:12:39.758 "name": null, 00:12:39.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.758 "is_configured": false, 00:12:39.758 "data_offset": 0, 00:12:39.758 "data_size": 63488 00:12:39.758 }, 00:12:39.758 { 00:12:39.758 "name": null, 00:12:39.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.758 "is_configured": false, 00:12:39.758 "data_offset": 2048, 00:12:39.758 "data_size": 63488 00:12:39.758 }, 00:12:39.758 { 00:12:39.758 "name": "BaseBdev3", 00:12:39.758 
"uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:39.758 "is_configured": true, 00:12:39.758 "data_offset": 2048, 00:12:39.758 "data_size": 63488 00:12:39.758 }, 00:12:39.758 { 00:12:39.758 "name": "BaseBdev4", 00:12:39.758 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:39.758 "is_configured": true, 00:12:39.758 "data_offset": 2048, 00:12:39.758 "data_size": 63488 00:12:39.758 } 00:12:39.758 ] 00:12:39.758 }' 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.758 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.327 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:40.327 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.327 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.327 [2024-09-30 23:30:19.982202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.327 [2024-09-30 23:30:19.982415] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:40.327 [2024-09-30 23:30:19.982482] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:40.327 [2024-09-30 23:30:19.982552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.327 [2024-09-30 23:30:19.988325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:40.327 23:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.327 23:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:40.327 [2024-09-30 23:30:19.990448] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:41.265 23:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.265 23:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.265 23:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.265 23:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.265 23:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.265 23:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.265 23:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.265 23:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.265 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.265 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.265 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.265 "name": "raid_bdev1", 00:12:41.265 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:41.265 "strip_size_kb": 0, 00:12:41.265 "state": "online", 00:12:41.265 "raid_level": "raid1", 
00:12:41.265 "superblock": true, 00:12:41.265 "num_base_bdevs": 4, 00:12:41.265 "num_base_bdevs_discovered": 3, 00:12:41.265 "num_base_bdevs_operational": 3, 00:12:41.265 "process": { 00:12:41.265 "type": "rebuild", 00:12:41.265 "target": "spare", 00:12:41.265 "progress": { 00:12:41.265 "blocks": 20480, 00:12:41.265 "percent": 32 00:12:41.265 } 00:12:41.265 }, 00:12:41.265 "base_bdevs_list": [ 00:12:41.265 { 00:12:41.265 "name": "spare", 00:12:41.265 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:41.265 "is_configured": true, 00:12:41.265 "data_offset": 2048, 00:12:41.265 "data_size": 63488 00:12:41.265 }, 00:12:41.265 { 00:12:41.265 "name": null, 00:12:41.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.265 "is_configured": false, 00:12:41.265 "data_offset": 2048, 00:12:41.265 "data_size": 63488 00:12:41.265 }, 00:12:41.265 { 00:12:41.265 "name": "BaseBdev3", 00:12:41.265 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:41.265 "is_configured": true, 00:12:41.265 "data_offset": 2048, 00:12:41.265 "data_size": 63488 00:12:41.265 }, 00:12:41.265 { 00:12:41.265 "name": "BaseBdev4", 00:12:41.265 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:41.265 "is_configured": true, 00:12:41.265 "data_offset": 2048, 00:12:41.265 "data_size": 63488 00:12:41.265 } 00:12:41.265 ] 00:12:41.265 }' 00:12:41.265 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.265 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.265 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.525 [2024-09-30 23:30:21.142403] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.525 [2024-09-30 23:30:21.198032] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:41.525 [2024-09-30 23:30:21.198093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.525 [2024-09-30 23:30:21.198109] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:41.525 [2024-09-30 23:30:21.198118] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.525 "name": "raid_bdev1", 00:12:41.525 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:41.525 "strip_size_kb": 0, 00:12:41.525 "state": "online", 00:12:41.525 "raid_level": "raid1", 00:12:41.525 "superblock": true, 00:12:41.525 "num_base_bdevs": 4, 00:12:41.525 "num_base_bdevs_discovered": 2, 00:12:41.525 "num_base_bdevs_operational": 2, 00:12:41.525 "base_bdevs_list": [ 00:12:41.525 { 00:12:41.525 "name": null, 00:12:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.525 "is_configured": false, 00:12:41.525 "data_offset": 0, 00:12:41.525 "data_size": 63488 00:12:41.525 }, 00:12:41.525 { 00:12:41.525 "name": null, 00:12:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.525 "is_configured": false, 00:12:41.525 "data_offset": 2048, 00:12:41.525 "data_size": 63488 00:12:41.525 }, 00:12:41.525 { 00:12:41.525 "name": "BaseBdev3", 00:12:41.525 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:41.525 "is_configured": true, 00:12:41.525 "data_offset": 2048, 00:12:41.525 "data_size": 63488 00:12:41.525 }, 00:12:41.525 { 00:12:41.525 "name": "BaseBdev4", 00:12:41.525 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:41.525 "is_configured": true, 00:12:41.525 "data_offset": 2048, 00:12:41.525 "data_size": 63488 00:12:41.525 } 00:12:41.525 ] 00:12:41.525 }' 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:41.525 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.091 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.091 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.091 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.091 [2024-09-30 23:30:21.683311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:42.091 [2024-09-30 23:30:21.683434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.091 [2024-09-30 23:30:21.683482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:42.091 [2024-09-30 23:30:21.683515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.091 [2024-09-30 23:30:21.684054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.091 [2024-09-30 23:30:21.684119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.091 [2024-09-30 23:30:21.684230] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:42.091 [2024-09-30 23:30:21.684278] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:42.091 [2024-09-30 23:30:21.684318] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:42.091 [2024-09-30 23:30:21.684389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.091 [2024-09-30 23:30:21.688631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:42.091 spare 00:12:42.091 23:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.091 23:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:42.091 [2024-09-30 23:30:21.690743] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.026 "name": "raid_bdev1", 00:12:43.026 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:43.026 "strip_size_kb": 0, 00:12:43.026 "state": "online", 00:12:43.026 
"raid_level": "raid1", 00:12:43.026 "superblock": true, 00:12:43.026 "num_base_bdevs": 4, 00:12:43.026 "num_base_bdevs_discovered": 3, 00:12:43.026 "num_base_bdevs_operational": 3, 00:12:43.026 "process": { 00:12:43.026 "type": "rebuild", 00:12:43.026 "target": "spare", 00:12:43.026 "progress": { 00:12:43.026 "blocks": 20480, 00:12:43.026 "percent": 32 00:12:43.026 } 00:12:43.026 }, 00:12:43.026 "base_bdevs_list": [ 00:12:43.026 { 00:12:43.026 "name": "spare", 00:12:43.026 "uuid": "24d191ef-ba72-516f-afb5-09961d2b863f", 00:12:43.026 "is_configured": true, 00:12:43.026 "data_offset": 2048, 00:12:43.026 "data_size": 63488 00:12:43.026 }, 00:12:43.026 { 00:12:43.026 "name": null, 00:12:43.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.026 "is_configured": false, 00:12:43.026 "data_offset": 2048, 00:12:43.026 "data_size": 63488 00:12:43.026 }, 00:12:43.026 { 00:12:43.026 "name": "BaseBdev3", 00:12:43.026 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:43.026 "is_configured": true, 00:12:43.026 "data_offset": 2048, 00:12:43.026 "data_size": 63488 00:12:43.026 }, 00:12:43.026 { 00:12:43.026 "name": "BaseBdev4", 00:12:43.026 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:43.026 "is_configured": true, 00:12:43.026 "data_offset": 2048, 00:12:43.026 "data_size": 63488 00:12:43.026 } 00:12:43.026 ] 00:12:43.026 }' 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.026 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.026 [2024-09-30 23:30:22.850485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.285 [2024-09-30 23:30:22.898247] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:43.285 [2024-09-30 23:30:22.898311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.285 [2024-09-30 23:30:22.898330] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.285 [2024-09-30 23:30:22.898338] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:43.285 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.285 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.285 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.285 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.285 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.285 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.286 
23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.286 "name": "raid_bdev1", 00:12:43.286 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:43.286 "strip_size_kb": 0, 00:12:43.286 "state": "online", 00:12:43.286 "raid_level": "raid1", 00:12:43.286 "superblock": true, 00:12:43.286 "num_base_bdevs": 4, 00:12:43.286 "num_base_bdevs_discovered": 2, 00:12:43.286 "num_base_bdevs_operational": 2, 00:12:43.286 "base_bdevs_list": [ 00:12:43.286 { 00:12:43.286 "name": null, 00:12:43.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.286 "is_configured": false, 00:12:43.286 "data_offset": 0, 00:12:43.286 "data_size": 63488 00:12:43.286 }, 00:12:43.286 { 00:12:43.286 "name": null, 00:12:43.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.286 "is_configured": false, 00:12:43.286 "data_offset": 2048, 00:12:43.286 "data_size": 63488 00:12:43.286 }, 00:12:43.286 { 00:12:43.286 "name": "BaseBdev3", 00:12:43.286 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:43.286 "is_configured": true, 00:12:43.286 "data_offset": 2048, 00:12:43.286 "data_size": 63488 00:12:43.286 }, 00:12:43.286 { 00:12:43.286 "name": "BaseBdev4", 00:12:43.286 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:43.286 "is_configured": true, 00:12:43.286 "data_offset": 2048, 00:12:43.286 "data_size": 63488 00:12:43.286 } 00:12:43.286 ] 00:12:43.286 }' 00:12:43.286 23:30:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.286 23:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.545 "name": "raid_bdev1", 00:12:43.545 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:43.545 "strip_size_kb": 0, 00:12:43.545 "state": "online", 00:12:43.545 "raid_level": "raid1", 00:12:43.545 "superblock": true, 00:12:43.545 "num_base_bdevs": 4, 00:12:43.545 "num_base_bdevs_discovered": 2, 00:12:43.545 "num_base_bdevs_operational": 2, 00:12:43.545 "base_bdevs_list": [ 00:12:43.545 { 00:12:43.545 "name": null, 00:12:43.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.545 "is_configured": false, 00:12:43.545 "data_offset": 0, 00:12:43.545 "data_size": 63488 00:12:43.545 }, 00:12:43.545 
{ 00:12:43.545 "name": null, 00:12:43.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.545 "is_configured": false, 00:12:43.545 "data_offset": 2048, 00:12:43.545 "data_size": 63488 00:12:43.545 }, 00:12:43.545 { 00:12:43.545 "name": "BaseBdev3", 00:12:43.545 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:43.545 "is_configured": true, 00:12:43.545 "data_offset": 2048, 00:12:43.545 "data_size": 63488 00:12:43.545 }, 00:12:43.545 { 00:12:43.545 "name": "BaseBdev4", 00:12:43.545 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:43.545 "is_configured": true, 00:12:43.545 "data_offset": 2048, 00:12:43.545 "data_size": 63488 00:12:43.545 } 00:12:43.545 ] 00:12:43.545 }' 00:12:43.545 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.804 [2024-09-30 23:30:23.507188] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:43.804 [2024-09-30 23:30:23.507254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.804 [2024-09-30 23:30:23.507298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:43.804 [2024-09-30 23:30:23.507308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.804 [2024-09-30 23:30:23.507800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.804 [2024-09-30 23:30:23.507818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:43.804 [2024-09-30 23:30:23.507909] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:43.804 [2024-09-30 23:30:23.507933] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:43.804 [2024-09-30 23:30:23.507946] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:43.804 [2024-09-30 23:30:23.507962] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:43.804 BaseBdev1 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.804 23:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.739 23:30:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.739 "name": "raid_bdev1", 00:12:44.739 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:44.739 "strip_size_kb": 0, 00:12:44.739 "state": "online", 00:12:44.739 "raid_level": "raid1", 00:12:44.739 "superblock": true, 00:12:44.739 "num_base_bdevs": 4, 00:12:44.739 "num_base_bdevs_discovered": 2, 00:12:44.739 "num_base_bdevs_operational": 2, 00:12:44.739 "base_bdevs_list": [ 00:12:44.739 { 00:12:44.739 "name": null, 00:12:44.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.739 "is_configured": false, 00:12:44.739 "data_offset": 0, 00:12:44.739 "data_size": 63488 00:12:44.739 }, 00:12:44.739 { 00:12:44.739 "name": null, 00:12:44.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.739 
"is_configured": false, 00:12:44.739 "data_offset": 2048, 00:12:44.739 "data_size": 63488 00:12:44.739 }, 00:12:44.739 { 00:12:44.739 "name": "BaseBdev3", 00:12:44.739 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:44.739 "is_configured": true, 00:12:44.739 "data_offset": 2048, 00:12:44.739 "data_size": 63488 00:12:44.739 }, 00:12:44.739 { 00:12:44.739 "name": "BaseBdev4", 00:12:44.739 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:44.739 "is_configured": true, 00:12:44.739 "data_offset": 2048, 00:12:44.739 "data_size": 63488 00:12:44.739 } 00:12:44.739 ] 00:12:44.739 }' 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.739 23:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:45.307 "name": "raid_bdev1", 00:12:45.307 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:45.307 "strip_size_kb": 0, 00:12:45.307 "state": "online", 00:12:45.307 "raid_level": "raid1", 00:12:45.307 "superblock": true, 00:12:45.307 "num_base_bdevs": 4, 00:12:45.307 "num_base_bdevs_discovered": 2, 00:12:45.307 "num_base_bdevs_operational": 2, 00:12:45.307 "base_bdevs_list": [ 00:12:45.307 { 00:12:45.307 "name": null, 00:12:45.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.307 "is_configured": false, 00:12:45.307 "data_offset": 0, 00:12:45.307 "data_size": 63488 00:12:45.307 }, 00:12:45.307 { 00:12:45.307 "name": null, 00:12:45.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.307 "is_configured": false, 00:12:45.307 "data_offset": 2048, 00:12:45.307 "data_size": 63488 00:12:45.307 }, 00:12:45.307 { 00:12:45.307 "name": "BaseBdev3", 00:12:45.307 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:45.307 "is_configured": true, 00:12:45.307 "data_offset": 2048, 00:12:45.307 "data_size": 63488 00:12:45.307 }, 00:12:45.307 { 00:12:45.307 "name": "BaseBdev4", 00:12:45.307 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:45.307 "is_configured": true, 00:12:45.307 "data_offset": 2048, 00:12:45.307 "data_size": 63488 00:12:45.307 } 00:12:45.307 ] 00:12:45.307 }' 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.307 23:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.308 [2024-09-30 23:30:25.036563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.308 [2024-09-30 23:30:25.036766] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:45.308 [2024-09-30 23:30:25.036820] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:45.308 request: 00:12:45.308 { 00:12:45.308 "base_bdev": "BaseBdev1", 00:12:45.308 "raid_bdev": "raid_bdev1", 00:12:45.308 "method": "bdev_raid_add_base_bdev", 00:12:45.308 "req_id": 1 00:12:45.308 } 00:12:45.308 Got JSON-RPC error response 00:12:45.308 response: 00:12:45.308 { 00:12:45.308 "code": -22, 00:12:45.308 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:45.308 } 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:45.308 23:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:46.246 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.246 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.246 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.246 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.246 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.246 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.246 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.246 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.247 "name": "raid_bdev1", 00:12:46.247 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:46.247 "strip_size_kb": 0, 00:12:46.247 "state": "online", 00:12:46.247 "raid_level": "raid1", 00:12:46.247 "superblock": true, 00:12:46.247 "num_base_bdevs": 4, 00:12:46.247 "num_base_bdevs_discovered": 2, 00:12:46.247 "num_base_bdevs_operational": 2, 00:12:46.247 "base_bdevs_list": [ 00:12:46.247 { 00:12:46.247 "name": null, 00:12:46.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.247 "is_configured": false, 00:12:46.247 "data_offset": 0, 00:12:46.247 "data_size": 63488 00:12:46.247 }, 00:12:46.247 { 00:12:46.247 "name": null, 00:12:46.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.247 "is_configured": false, 00:12:46.247 "data_offset": 2048, 00:12:46.247 "data_size": 63488 00:12:46.247 }, 00:12:46.247 { 00:12:46.247 "name": "BaseBdev3", 00:12:46.247 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:46.247 "is_configured": true, 00:12:46.247 "data_offset": 2048, 00:12:46.247 "data_size": 63488 00:12:46.247 }, 00:12:46.247 { 00:12:46.247 "name": "BaseBdev4", 00:12:46.247 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:46.247 "is_configured": true, 00:12:46.247 "data_offset": 2048, 00:12:46.247 "data_size": 63488 00:12:46.247 } 00:12:46.247 ] 00:12:46.247 }' 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.247 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.816 23:30:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.816 "name": "raid_bdev1", 00:12:46.816 "uuid": "1333bf72-5477-4cc1-a3da-9548cbfea4d0", 00:12:46.816 "strip_size_kb": 0, 00:12:46.816 "state": "online", 00:12:46.816 "raid_level": "raid1", 00:12:46.816 "superblock": true, 00:12:46.816 "num_base_bdevs": 4, 00:12:46.816 "num_base_bdevs_discovered": 2, 00:12:46.816 "num_base_bdevs_operational": 2, 00:12:46.816 "base_bdevs_list": [ 00:12:46.816 { 00:12:46.816 "name": null, 00:12:46.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.816 "is_configured": false, 00:12:46.816 "data_offset": 0, 00:12:46.816 "data_size": 63488 00:12:46.816 }, 00:12:46.816 { 00:12:46.816 "name": null, 00:12:46.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.816 "is_configured": false, 00:12:46.816 "data_offset": 2048, 00:12:46.816 "data_size": 63488 00:12:46.816 }, 00:12:46.816 { 00:12:46.816 "name": "BaseBdev3", 00:12:46.816 "uuid": "7066e3ad-b371-518f-9f3c-398a86390782", 00:12:46.816 "is_configured": true, 00:12:46.816 "data_offset": 2048, 00:12:46.816 "data_size": 63488 00:12:46.816 }, 
00:12:46.816 { 00:12:46.816 "name": "BaseBdev4", 00:12:46.816 "uuid": "e02c0b92-3e67-54ad-b993-015294ca32a1", 00:12:46.816 "is_configured": true, 00:12:46.816 "data_offset": 2048, 00:12:46.816 "data_size": 63488 00:12:46.816 } 00:12:46.816 ] 00:12:46.816 }' 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88654 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88654 ']' 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88654 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88654 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:46.816 killing process with pid 88654 00:12:46.816 Received shutdown signal, test time was about 60.000000 seconds 00:12:46.816 00:12:46.816 Latency(us) 00:12:46.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.816 =================================================================================================================== 00:12:46.816 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # 
'[' reactor_0 = sudo ']' 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88654' 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88654 00:12:46.816 [2024-09-30 23:30:26.613201] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.816 [2024-09-30 23:30:26.613306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.816 [2024-09-30 23:30:26.613365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.816 [2024-09-30 23:30:26.613378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:46.816 23:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88654 00:12:47.076 [2024-09-30 23:30:26.705441] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:47.336 ************************************ 00:12:47.336 END TEST raid_rebuild_test_sb 00:12:47.336 ************************************ 00:12:47.336 00:12:47.336 real 0m23.345s 00:12:47.336 user 0m28.233s 00:12:47.336 sys 0m3.943s 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.336 23:30:27 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:47.336 23:30:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:47.336 23:30:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.336 23:30:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.336 ************************************ 00:12:47.336 START TEST raid_rebuild_test_io 
00:12:47.336 ************************************ 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89391 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89391 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89391 ']' 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:47.336 23:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.596 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:47.596 Zero copy mechanism will not be used. 00:12:47.596 [2024-09-30 23:30:27.240170] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:12:47.596 [2024-09-30 23:30:27.240390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89391 ] 00:12:47.596 [2024-09-30 23:30:27.399572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.855 [2024-09-30 23:30:27.467762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.855 [2024-09-30 23:30:27.543569] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.856 [2024-09-30 23:30:27.543678] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.427 BaseBdev1_malloc 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.427 [2024-09-30 23:30:28.093571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:48.427 [2024-09-30 23:30:28.093643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.427 [2024-09-30 23:30:28.093671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:48.427 [2024-09-30 23:30:28.093686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.427 [2024-09-30 23:30:28.096130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.427 [2024-09-30 23:30:28.096167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:48.427 BaseBdev1 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.427 BaseBdev2_malloc 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.427 [2024-09-30 23:30:28.149246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:48.427 [2024-09-30 23:30:28.149486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.427 [2024-09-30 23:30:28.149546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:48.427 [2024-09-30 23:30:28.149571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.427 [2024-09-30 23:30:28.153827] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.427 [2024-09-30 23:30:28.153898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:48.427 BaseBdev2 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.427 BaseBdev3_malloc 00:12:48.427 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.428 [2024-09-30 23:30:28.184790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:48.428 [2024-09-30 23:30:28.184927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.428 [2024-09-30 23:30:28.184959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:48.428 [2024-09-30 23:30:28.184968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.428 [2024-09-30 23:30:28.187387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.428 [2024-09-30 23:30:28.187423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:48.428 BaseBdev3 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.428 BaseBdev4_malloc 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:12:48.428 [2024-09-30 23:30:28.219290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:48.428 [2024-09-30 23:30:28.219424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.428 [2024-09-30 23:30:28.219456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:48.428 [2024-09-30 23:30:28.219464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.428 [2024-09-30 23:30:28.221801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.428 [2024-09-30 23:30:28.221837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:48.428 BaseBdev4 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.428 spare_malloc 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.428 spare_delay 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:48.428 23:30:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.428 [2024-09-30 23:30:28.265692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:48.428 [2024-09-30 23:30:28.265743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.428 [2024-09-30 23:30:28.265765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:48.428 [2024-09-30 23:30:28.265773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.428 [2024-09-30 23:30:28.268115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.428 [2024-09-30 23:30:28.268151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:48.428 spare 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.428 [2024-09-30 23:30:28.277779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.690 [2024-09-30 23:30:28.279901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.690 [2024-09-30 23:30:28.279975] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:48.690 [2024-09-30 23:30:28.280019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:48.690 [2024-09-30 23:30:28.280099] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:48.690 [2024-09-30 23:30:28.280110] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:48.690 [2024-09-30 23:30:28.280355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:48.690 [2024-09-30 23:30:28.280498] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:48.690 [2024-09-30 23:30:28.280511] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:48.690 [2024-09-30 23:30:28.280638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.690 "name": "raid_bdev1", 00:12:48.690 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:48.690 "strip_size_kb": 0, 00:12:48.690 "state": "online", 00:12:48.690 "raid_level": "raid1", 00:12:48.690 "superblock": false, 00:12:48.690 "num_base_bdevs": 4, 00:12:48.690 "num_base_bdevs_discovered": 4, 00:12:48.690 "num_base_bdevs_operational": 4, 00:12:48.690 "base_bdevs_list": [ 00:12:48.690 { 00:12:48.690 "name": "BaseBdev1", 00:12:48.690 "uuid": "cf6c53fd-8973-5356-8ade-d83d7ebdc52f", 00:12:48.690 "is_configured": true, 00:12:48.690 "data_offset": 0, 00:12:48.690 "data_size": 65536 00:12:48.690 }, 00:12:48.690 { 00:12:48.690 "name": "BaseBdev2", 00:12:48.690 "uuid": "c1d4acc4-c7cf-504e-a841-c8e468afc463", 00:12:48.690 "is_configured": true, 00:12:48.690 "data_offset": 0, 00:12:48.690 "data_size": 65536 00:12:48.690 }, 00:12:48.690 { 00:12:48.690 "name": "BaseBdev3", 00:12:48.690 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:48.690 "is_configured": true, 00:12:48.690 "data_offset": 0, 00:12:48.690 "data_size": 65536 00:12:48.690 }, 00:12:48.690 { 00:12:48.690 "name": "BaseBdev4", 00:12:48.690 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:48.690 "is_configured": true, 00:12:48.690 "data_offset": 0, 00:12:48.690 "data_size": 65536 00:12:48.690 } 00:12:48.690 ] 00:12:48.690 }' 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:48.690 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.950 [2024-09-30 23:30:28.745229] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.950 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.210 [2024-09-30 23:30:28.848751] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.210 23:30:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.210 "name": "raid_bdev1", 00:12:49.210 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:49.210 "strip_size_kb": 0, 00:12:49.210 "state": "online", 00:12:49.210 "raid_level": "raid1", 00:12:49.210 "superblock": false, 00:12:49.210 "num_base_bdevs": 4, 00:12:49.210 "num_base_bdevs_discovered": 3, 00:12:49.210 "num_base_bdevs_operational": 3, 00:12:49.210 "base_bdevs_list": [ 00:12:49.210 { 00:12:49.210 "name": null, 00:12:49.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.210 "is_configured": false, 00:12:49.210 "data_offset": 0, 00:12:49.210 "data_size": 65536 00:12:49.210 }, 00:12:49.210 { 00:12:49.210 "name": "BaseBdev2", 00:12:49.210 "uuid": "c1d4acc4-c7cf-504e-a841-c8e468afc463", 00:12:49.210 "is_configured": true, 00:12:49.210 "data_offset": 0, 00:12:49.210 "data_size": 65536 00:12:49.210 }, 00:12:49.210 { 00:12:49.210 "name": "BaseBdev3", 00:12:49.210 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:49.210 "is_configured": true, 00:12:49.210 "data_offset": 0, 00:12:49.210 "data_size": 65536 00:12:49.210 }, 00:12:49.210 { 00:12:49.210 "name": "BaseBdev4", 00:12:49.210 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:49.210 "is_configured": true, 00:12:49.210 "data_offset": 0, 00:12:49.210 "data_size": 65536 00:12:49.210 } 00:12:49.210 ] 00:12:49.210 }' 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.210 23:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.210 [2024-09-30 23:30:28.936038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:49.210 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:49.210 Zero copy mechanism will not be used. 00:12:49.210 Running I/O for 60 seconds... 
00:12:49.469 23:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:49.469 23:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.469 23:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.469 [2024-09-30 23:30:29.250600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.469 23:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.469 23:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:49.469 [2024-09-30 23:30:29.312016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:49.469 [2024-09-30 23:30:29.314378] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.729 [2024-09-30 23:30:29.452702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:49.987 [2024-09-30 23:30:29.689471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:49.987 [2024-09-30 23:30:29.690624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:50.246 168.00 IOPS, 504.00 MiB/s [2024-09-30 23:30:30.040296] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:50.246 [2024-09-30 23:30:30.041189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:50.506 [2024-09-30 23:30:30.247002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.506 "name": "raid_bdev1", 00:12:50.506 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:50.506 "strip_size_kb": 0, 00:12:50.506 "state": "online", 00:12:50.506 "raid_level": "raid1", 00:12:50.506 "superblock": false, 00:12:50.506 "num_base_bdevs": 4, 00:12:50.506 "num_base_bdevs_discovered": 4, 00:12:50.506 "num_base_bdevs_operational": 4, 00:12:50.506 "process": { 00:12:50.506 "type": "rebuild", 00:12:50.506 "target": "spare", 00:12:50.506 "progress": { 00:12:50.506 "blocks": 10240, 00:12:50.506 "percent": 15 00:12:50.506 } 00:12:50.506 }, 00:12:50.506 "base_bdevs_list": [ 00:12:50.506 { 00:12:50.506 "name": "spare", 00:12:50.506 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:50.506 "is_configured": true, 00:12:50.506 "data_offset": 0, 00:12:50.506 "data_size": 65536 00:12:50.506 }, 00:12:50.506 { 00:12:50.506 "name": "BaseBdev2", 
00:12:50.506 "uuid": "c1d4acc4-c7cf-504e-a841-c8e468afc463", 00:12:50.506 "is_configured": true, 00:12:50.506 "data_offset": 0, 00:12:50.506 "data_size": 65536 00:12:50.506 }, 00:12:50.506 { 00:12:50.506 "name": "BaseBdev3", 00:12:50.506 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:50.506 "is_configured": true, 00:12:50.506 "data_offset": 0, 00:12:50.506 "data_size": 65536 00:12:50.506 }, 00:12:50.506 { 00:12:50.506 "name": "BaseBdev4", 00:12:50.506 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:50.506 "is_configured": true, 00:12:50.506 "data_offset": 0, 00:12:50.506 "data_size": 65536 00:12:50.506 } 00:12:50.506 ] 00:12:50.506 }' 00:12:50.506 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.766 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.766 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.766 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.766 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:50.766 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.766 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.766 [2024-09-30 23:30:30.454092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.766 [2024-09-30 23:30:30.592197] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:50.766 [2024-09-30 23:30:30.611281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.766 [2024-09-30 23:30:30.611400] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.766 [2024-09-30 23:30:30.611419] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:51.025 [2024-09-30 23:30:30.638010] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.025 "name": "raid_bdev1", 00:12:51.025 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:51.025 "strip_size_kb": 0, 00:12:51.025 "state": "online", 00:12:51.025 "raid_level": "raid1", 00:12:51.025 "superblock": false, 00:12:51.025 "num_base_bdevs": 4, 00:12:51.025 "num_base_bdevs_discovered": 3, 00:12:51.025 "num_base_bdevs_operational": 3, 00:12:51.025 "base_bdevs_list": [ 00:12:51.025 { 00:12:51.025 "name": null, 00:12:51.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.025 "is_configured": false, 00:12:51.025 "data_offset": 0, 00:12:51.025 "data_size": 65536 00:12:51.025 }, 00:12:51.025 { 00:12:51.025 "name": "BaseBdev2", 00:12:51.025 "uuid": "c1d4acc4-c7cf-504e-a841-c8e468afc463", 00:12:51.025 "is_configured": true, 00:12:51.025 "data_offset": 0, 00:12:51.025 "data_size": 65536 00:12:51.025 }, 00:12:51.025 { 00:12:51.025 "name": "BaseBdev3", 00:12:51.025 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:51.025 "is_configured": true, 00:12:51.025 "data_offset": 0, 00:12:51.025 "data_size": 65536 00:12:51.025 }, 00:12:51.025 { 00:12:51.025 "name": "BaseBdev4", 00:12:51.025 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:51.025 "is_configured": true, 00:12:51.025 "data_offset": 0, 00:12:51.025 "data_size": 65536 00:12:51.025 } 00:12:51.025 ] 00:12:51.025 }' 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.025 23:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.285 149.50 IOPS, 448.50 MiB/s 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.285 "name": "raid_bdev1", 00:12:51.285 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:51.285 "strip_size_kb": 0, 00:12:51.285 "state": "online", 00:12:51.285 "raid_level": "raid1", 00:12:51.285 "superblock": false, 00:12:51.285 "num_base_bdevs": 4, 00:12:51.285 "num_base_bdevs_discovered": 3, 00:12:51.285 "num_base_bdevs_operational": 3, 00:12:51.285 "base_bdevs_list": [ 00:12:51.285 { 00:12:51.285 "name": null, 00:12:51.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.285 "is_configured": false, 00:12:51.285 "data_offset": 0, 00:12:51.285 "data_size": 65536 00:12:51.285 }, 00:12:51.285 { 00:12:51.285 "name": "BaseBdev2", 00:12:51.285 "uuid": "c1d4acc4-c7cf-504e-a841-c8e468afc463", 00:12:51.285 "is_configured": true, 00:12:51.285 "data_offset": 0, 00:12:51.285 "data_size": 65536 00:12:51.285 }, 00:12:51.285 { 00:12:51.285 "name": "BaseBdev3", 00:12:51.285 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:51.285 "is_configured": true, 00:12:51.285 "data_offset": 0, 00:12:51.285 "data_size": 65536 00:12:51.285 }, 00:12:51.285 { 00:12:51.285 "name": "BaseBdev4", 00:12:51.285 "uuid": 
"1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:51.285 "is_configured": true, 00:12:51.285 "data_offset": 0, 00:12:51.285 "data_size": 65536 00:12:51.285 } 00:12:51.285 ] 00:12:51.285 }' 00:12:51.285 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.544 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.544 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.544 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.544 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.544 23:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.544 23:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.544 [2024-09-30 23:30:31.200052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.544 23:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.544 23:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:51.544 [2024-09-30 23:30:31.263151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:51.544 [2024-09-30 23:30:31.265444] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.544 [2024-09-30 23:30:31.382382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.544 [2024-09-30 23:30:31.382926] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:51.804 [2024-09-30 23:30:31.627114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 
0 offset_end: 6144 00:12:52.063 [2024-09-30 23:30:31.857427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:52.322 152.33 IOPS, 457.00 MiB/s [2024-09-30 23:30:31.965044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:52.581 [2024-09-30 23:30:32.203683] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.581 "name": "raid_bdev1", 00:12:52.581 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:52.581 "strip_size_kb": 0, 00:12:52.581 "state": "online", 00:12:52.581 "raid_level": "raid1", 00:12:52.581 "superblock": false, 00:12:52.581 
"num_base_bdevs": 4, 00:12:52.581 "num_base_bdevs_discovered": 4, 00:12:52.581 "num_base_bdevs_operational": 4, 00:12:52.581 "process": { 00:12:52.581 "type": "rebuild", 00:12:52.581 "target": "spare", 00:12:52.581 "progress": { 00:12:52.581 "blocks": 14336, 00:12:52.581 "percent": 21 00:12:52.581 } 00:12:52.581 }, 00:12:52.581 "base_bdevs_list": [ 00:12:52.581 { 00:12:52.581 "name": "spare", 00:12:52.581 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:52.581 "is_configured": true, 00:12:52.581 "data_offset": 0, 00:12:52.581 "data_size": 65536 00:12:52.581 }, 00:12:52.581 { 00:12:52.581 "name": "BaseBdev2", 00:12:52.581 "uuid": "c1d4acc4-c7cf-504e-a841-c8e468afc463", 00:12:52.581 "is_configured": true, 00:12:52.581 "data_offset": 0, 00:12:52.581 "data_size": 65536 00:12:52.581 }, 00:12:52.581 { 00:12:52.581 "name": "BaseBdev3", 00:12:52.581 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:52.581 "is_configured": true, 00:12:52.581 "data_offset": 0, 00:12:52.581 "data_size": 65536 00:12:52.581 }, 00:12:52.581 { 00:12:52.581 "name": "BaseBdev4", 00:12:52.581 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:52.581 "is_configured": true, 00:12:52.581 "data_offset": 0, 00:12:52.581 "data_size": 65536 00:12:52.581 } 00:12:52.581 ] 00:12:52.581 }' 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.581 [2024-09-30 23:30:32.319676] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:52.581 [2024-09-30 23:30:32.320773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.581 23:30:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.581 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.581 [2024-09-30 23:30:32.385206] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.840 [2024-09-30 23:30:32.564966] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:12:52.840 [2024-09-30 23:30:32.565018] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:12:52.840 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.840 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:52.840 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:52.840 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.840 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.841 "name": "raid_bdev1", 00:12:52.841 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:52.841 "strip_size_kb": 0, 00:12:52.841 "state": "online", 00:12:52.841 "raid_level": "raid1", 00:12:52.841 "superblock": false, 00:12:52.841 "num_base_bdevs": 4, 00:12:52.841 "num_base_bdevs_discovered": 3, 00:12:52.841 "num_base_bdevs_operational": 3, 00:12:52.841 "process": { 00:12:52.841 "type": "rebuild", 00:12:52.841 "target": "spare", 00:12:52.841 "progress": { 00:12:52.841 "blocks": 18432, 00:12:52.841 "percent": 28 00:12:52.841 } 00:12:52.841 }, 00:12:52.841 "base_bdevs_list": [ 00:12:52.841 { 00:12:52.841 "name": "spare", 00:12:52.841 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:52.841 "is_configured": true, 00:12:52.841 "data_offset": 0, 00:12:52.841 "data_size": 65536 00:12:52.841 }, 00:12:52.841 { 00:12:52.841 "name": null, 00:12:52.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.841 "is_configured": false, 00:12:52.841 "data_offset": 0, 00:12:52.841 "data_size": 65536 00:12:52.841 }, 00:12:52.841 { 00:12:52.841 "name": "BaseBdev3", 00:12:52.841 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:52.841 "is_configured": true, 00:12:52.841 "data_offset": 0, 00:12:52.841 
"data_size": 65536 00:12:52.841 }, 00:12:52.841 { 00:12:52.841 "name": "BaseBdev4", 00:12:52.841 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:52.841 "is_configured": true, 00:12:52.841 "data_offset": 0, 00:12:52.841 "data_size": 65536 00:12:52.841 } 00:12:52.841 ] 00:12:52.841 }' 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.841 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.100 [2024-09-30 23:30:32.705586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.100 "name": "raid_bdev1", 00:12:53.100 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:53.100 "strip_size_kb": 0, 00:12:53.100 "state": "online", 00:12:53.100 "raid_level": "raid1", 00:12:53.100 "superblock": false, 00:12:53.100 "num_base_bdevs": 4, 00:12:53.100 "num_base_bdevs_discovered": 3, 00:12:53.100 "num_base_bdevs_operational": 3, 00:12:53.100 "process": { 00:12:53.100 "type": "rebuild", 00:12:53.100 "target": "spare", 00:12:53.100 "progress": { 00:12:53.100 "blocks": 20480, 00:12:53.100 "percent": 31 00:12:53.100 } 00:12:53.100 }, 00:12:53.100 "base_bdevs_list": [ 00:12:53.100 { 00:12:53.100 "name": "spare", 00:12:53.100 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:53.100 "is_configured": true, 00:12:53.100 "data_offset": 0, 00:12:53.100 "data_size": 65536 00:12:53.100 }, 00:12:53.100 { 00:12:53.100 "name": null, 00:12:53.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.100 "is_configured": false, 00:12:53.100 "data_offset": 0, 00:12:53.100 "data_size": 65536 00:12:53.100 }, 00:12:53.100 { 00:12:53.100 "name": "BaseBdev3", 00:12:53.100 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:53.100 "is_configured": true, 00:12:53.100 "data_offset": 0, 00:12:53.100 "data_size": 65536 00:12:53.100 }, 00:12:53.100 { 00:12:53.100 "name": "BaseBdev4", 00:12:53.100 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:53.100 "is_configured": true, 00:12:53.100 "data_offset": 0, 00:12:53.100 "data_size": 65536 00:12:53.100 } 00:12:53.100 ] 00:12:53.100 }' 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.100 
23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.100 23:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:53.359 134.50 IOPS, 403.50 MiB/s [2024-09-30 23:30:33.162960] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:53.359 [2024-09-30 23:30:33.164331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:53.618 [2024-09-30 23:30:33.381994] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:53.618 [2024-09-30 23:30:33.382218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.187 "name": "raid_bdev1", 00:12:54.187 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:54.187 "strip_size_kb": 0, 00:12:54.187 "state": "online", 00:12:54.187 "raid_level": "raid1", 00:12:54.187 "superblock": false, 00:12:54.187 "num_base_bdevs": 4, 00:12:54.187 "num_base_bdevs_discovered": 3, 00:12:54.187 "num_base_bdevs_operational": 3, 00:12:54.187 "process": { 00:12:54.187 "type": "rebuild", 00:12:54.187 "target": "spare", 00:12:54.187 "progress": { 00:12:54.187 "blocks": 34816, 00:12:54.187 "percent": 53 00:12:54.187 } 00:12:54.187 }, 00:12:54.187 "base_bdevs_list": [ 00:12:54.187 { 00:12:54.187 "name": "spare", 00:12:54.187 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:54.187 "is_configured": true, 00:12:54.187 "data_offset": 0, 00:12:54.187 "data_size": 65536 00:12:54.187 }, 00:12:54.187 { 00:12:54.187 "name": null, 00:12:54.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.187 "is_configured": false, 00:12:54.187 "data_offset": 0, 00:12:54.187 "data_size": 65536 00:12:54.187 }, 00:12:54.187 { 00:12:54.187 "name": "BaseBdev3", 00:12:54.187 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:54.187 "is_configured": true, 00:12:54.187 "data_offset": 0, 00:12:54.187 "data_size": 65536 00:12:54.187 }, 00:12:54.187 { 00:12:54.187 "name": "BaseBdev4", 00:12:54.187 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:54.187 "is_configured": true, 00:12:54.187 "data_offset": 0, 00:12:54.187 "data_size": 65536 00:12:54.187 } 00:12:54.187 ] 00:12:54.187 }' 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.187 114.60 IOPS, 343.80 MiB/s [2024-09-30 23:30:33.966045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:54.187 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.188 23:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.447 [2024-09-30 23:30:34.181608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:55.015 [2024-09-30 23:30:34.642172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:55.276 102.00 IOPS, 306.00 MiB/s [2024-09-30 23:30:34.958412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.276 23:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.276 23:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.276 23:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.276 "name": "raid_bdev1", 00:12:55.276 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:55.276 "strip_size_kb": 0, 00:12:55.276 "state": "online", 00:12:55.276 "raid_level": "raid1", 00:12:55.276 "superblock": false, 00:12:55.276 "num_base_bdevs": 4, 00:12:55.276 "num_base_bdevs_discovered": 3, 00:12:55.276 "num_base_bdevs_operational": 3, 00:12:55.276 "process": { 00:12:55.276 "type": "rebuild", 00:12:55.276 "target": "spare", 00:12:55.276 "progress": { 00:12:55.276 "blocks": 51200, 00:12:55.276 "percent": 78 00:12:55.276 } 00:12:55.276 }, 00:12:55.276 "base_bdevs_list": [ 00:12:55.276 { 00:12:55.276 "name": "spare", 00:12:55.276 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:55.276 "is_configured": true, 00:12:55.276 "data_offset": 0, 00:12:55.276 "data_size": 65536 00:12:55.276 }, 00:12:55.276 { 00:12:55.276 "name": null, 00:12:55.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.276 "is_configured": false, 00:12:55.276 "data_offset": 0, 00:12:55.276 "data_size": 65536 00:12:55.276 }, 00:12:55.276 { 00:12:55.276 "name": "BaseBdev3", 00:12:55.276 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:55.276 "is_configured": true, 00:12:55.276 "data_offset": 0, 00:12:55.276 "data_size": 65536 00:12:55.276 }, 00:12:55.276 { 00:12:55.276 "name": "BaseBdev4", 00:12:55.276 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:55.276 "is_configured": true, 00:12:55.276 "data_offset": 0, 00:12:55.276 "data_size": 
65536 00:12:55.276 } 00:12:55.276 ] 00:12:55.276 }' 00:12:55.276 23:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.276 [2024-09-30 23:30:35.072412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:55.276 23:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.276 23:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.276 23:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.276 23:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.216 [2024-09-30 23:30:35.843097] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:56.216 92.14 IOPS, 276.43 MiB/s [2024-09-30 23:30:35.942886] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:56.216 [2024-09-30 23:30:35.946969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.475 23:30:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.475 "name": "raid_bdev1", 00:12:56.475 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:56.475 "strip_size_kb": 0, 00:12:56.475 "state": "online", 00:12:56.475 "raid_level": "raid1", 00:12:56.475 "superblock": false, 00:12:56.475 "num_base_bdevs": 4, 00:12:56.475 "num_base_bdevs_discovered": 3, 00:12:56.475 "num_base_bdevs_operational": 3, 00:12:56.475 "base_bdevs_list": [ 00:12:56.475 { 00:12:56.475 "name": "spare", 00:12:56.475 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:56.475 "is_configured": true, 00:12:56.475 "data_offset": 0, 00:12:56.475 "data_size": 65536 00:12:56.475 }, 00:12:56.475 { 00:12:56.475 "name": null, 00:12:56.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.475 "is_configured": false, 00:12:56.475 "data_offset": 0, 00:12:56.475 "data_size": 65536 00:12:56.475 }, 00:12:56.475 { 00:12:56.475 "name": "BaseBdev3", 00:12:56.475 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:56.475 "is_configured": true, 00:12:56.475 "data_offset": 0, 00:12:56.475 "data_size": 65536 00:12:56.475 }, 00:12:56.475 { 00:12:56.475 "name": "BaseBdev4", 00:12:56.475 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:56.475 "is_configured": true, 00:12:56.475 "data_offset": 0, 00:12:56.475 "data_size": 65536 00:12:56.475 } 00:12:56.475 ] 00:12:56.475 }' 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.475 "name": "raid_bdev1", 00:12:56.475 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:56.475 "strip_size_kb": 0, 00:12:56.475 "state": "online", 00:12:56.475 "raid_level": "raid1", 00:12:56.475 "superblock": false, 00:12:56.475 "num_base_bdevs": 4, 00:12:56.475 "num_base_bdevs_discovered": 3, 00:12:56.475 "num_base_bdevs_operational": 3, 00:12:56.475 "base_bdevs_list": [ 00:12:56.475 { 00:12:56.475 
"name": "spare", 00:12:56.475 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:56.475 "is_configured": true, 00:12:56.475 "data_offset": 0, 00:12:56.475 "data_size": 65536 00:12:56.475 }, 00:12:56.475 { 00:12:56.475 "name": null, 00:12:56.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.475 "is_configured": false, 00:12:56.475 "data_offset": 0, 00:12:56.475 "data_size": 65536 00:12:56.475 }, 00:12:56.475 { 00:12:56.475 "name": "BaseBdev3", 00:12:56.475 "uuid": "b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:56.475 "is_configured": true, 00:12:56.475 "data_offset": 0, 00:12:56.475 "data_size": 65536 00:12:56.475 }, 00:12:56.475 { 00:12:56.475 "name": "BaseBdev4", 00:12:56.475 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:56.475 "is_configured": true, 00:12:56.475 "data_offset": 0, 00:12:56.475 "data_size": 65536 00:12:56.475 } 00:12:56.475 ] 00:12:56.475 }' 00:12:56.475 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.735 "name": "raid_bdev1", 00:12:56.735 "uuid": "11c4c530-b06c-4f81-bf28-7a29f6012f37", 00:12:56.735 "strip_size_kb": 0, 00:12:56.735 "state": "online", 00:12:56.735 "raid_level": "raid1", 00:12:56.735 "superblock": false, 00:12:56.735 "num_base_bdevs": 4, 00:12:56.735 "num_base_bdevs_discovered": 3, 00:12:56.735 "num_base_bdevs_operational": 3, 00:12:56.735 "base_bdevs_list": [ 00:12:56.735 { 00:12:56.735 "name": "spare", 00:12:56.735 "uuid": "9b1ce018-46ec-52ab-a6db-27a0735d1c6d", 00:12:56.735 "is_configured": true, 00:12:56.735 "data_offset": 0, 00:12:56.735 "data_size": 65536 00:12:56.735 }, 00:12:56.735 { 00:12:56.735 "name": null, 00:12:56.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.735 "is_configured": false, 00:12:56.735 "data_offset": 0, 00:12:56.735 "data_size": 65536 00:12:56.735 }, 00:12:56.735 { 00:12:56.735 "name": "BaseBdev3", 00:12:56.735 "uuid": 
"b71d460f-6215-54be-babc-a3f0d9d761ee", 00:12:56.735 "is_configured": true, 00:12:56.735 "data_offset": 0, 00:12:56.735 "data_size": 65536 00:12:56.735 }, 00:12:56.735 { 00:12:56.735 "name": "BaseBdev4", 00:12:56.735 "uuid": "1dc399cd-4c9b-5661-b3b4-1869d81826ec", 00:12:56.735 "is_configured": true, 00:12:56.735 "data_offset": 0, 00:12:56.735 "data_size": 65536 00:12:56.735 } 00:12:56.735 ] 00:12:56.735 }' 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.735 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.995 [2024-09-30 23:30:36.754280] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.995 [2024-09-30 23:30:36.754320] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.995 00:12:56.995 Latency(us) 00:12:56.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.995 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:56.995 raid_bdev1 : 7.89 85.12 255.35 0.00 0.00 16418.66 295.13 114473.36 00:12:56.995 =================================================================================================================== 00:12:56.995 Total : 85.12 255.35 0.00 0.00 16418.66 295.13 114473.36 00:12:56.995 [2024-09-30 23:30:36.821633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.995 [2024-09-30 23:30:36.821675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.995 [2024-09-30 23:30:36.821813] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.995 [2024-09-30 23:30:36.821836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:56.995 { 00:12:56.995 "results": [ 00:12:56.995 { 00:12:56.995 "job": "raid_bdev1", 00:12:56.995 "core_mask": "0x1", 00:12:56.995 "workload": "randrw", 00:12:56.995 "percentage": 50, 00:12:56.995 "status": "finished", 00:12:56.995 "queue_depth": 2, 00:12:56.995 "io_size": 3145728, 00:12:56.995 "runtime": 7.894952, 00:12:56.995 "iops": 85.11768025948733, 00:12:56.995 "mibps": 255.353040778462, 00:12:56.995 "io_failed": 0, 00:12:56.995 "io_timeout": 0, 00:12:56.995 "avg_latency_us": 16418.657683510086, 00:12:56.995 "min_latency_us": 295.12663755458516, 00:12:56.995 "max_latency_us": 114473.36244541485 00:12:56.995 } 00:12:56.995 ], 00:12:56.995 "core_count": 1 00:12:56.995 } 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:56.995 23:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.254 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:57.254 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:57.254 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:57.254 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 
00:12:57.254 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.254 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:57.254 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.255 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:57.255 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.255 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:57.255 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.255 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.255 23:30:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:57.255 /dev/nbd0 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.255 23:30:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.255 1+0 records in 00:12:57.255 1+0 records out 00:12:57.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350124 s, 11.7 MB/s 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:57.255 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.514 23:30:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:57.514 /dev/nbd1 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.514 1+0 records in 00:12:57.514 1+0 records out 00:12:57.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273216 s, 15.0 MB/s 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.514 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:57.774 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:57.774 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.774 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:57.774 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.774 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:57.774 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.774 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.033 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.033 23:30:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:58.293 /dev/nbd1 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.293 1+0 records in 00:12:58.293 1+0 records out 00:12:58.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236056 s, 17.4 MB/s 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.293 23:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:58.553 
23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.553 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89391 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' 
-z 89391 ']' 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89391 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89391 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.812 killing process with pid 89391 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89391' 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89391 00:12:58.812 Received shutdown signal, test time was about 9.546156 seconds 00:12:58.812 00:12:58.812 Latency(us) 00:12:58.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.812 =================================================================================================================== 00:12:58.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:58.812 [2024-09-30 23:30:38.466095] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.812 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89391 00:12:58.812 [2024-09-30 23:30:38.548286] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.071 23:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:59.071 00:12:59.071 real 0m11.779s 00:12:59.071 user 0m14.930s 00:12:59.071 sys 0m1.886s 00:12:59.071 23:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.071 23:30:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.071 ************************************ 00:12:59.071 END TEST raid_rebuild_test_io 00:12:59.071 ************************************ 00:12:59.331 23:30:38 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:59.331 23:30:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:59.331 23:30:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.331 23:30:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.331 ************************************ 00:12:59.331 START TEST raid_rebuild_test_sb_io 00:12:59.331 ************************************ 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev2 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89791 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89791 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89791 ']' 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.331 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.331 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:59.331 Zero copy mechanism will not be used. 00:12:59.331 [2024-09-30 23:30:39.107617] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:12:59.332 [2024-09-30 23:30:39.107752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89791 ] 00:12:59.591 [2024-09-30 23:30:39.269649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.591 [2024-09-30 23:30:39.341771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.591 [2024-09-30 23:30:39.417936] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.591 [2024-09-30 23:30:39.417975] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.160 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.160 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:00.160 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.160 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.160 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.160 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.160 BaseBdev1_malloc 00:13:00.160 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.161 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:00.161 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.161 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.161 [2024-09-30 23:30:39.964478] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:00.161 [2024-09-30 23:30:39.964574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.161 [2024-09-30 23:30:39.964608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:00.161 [2024-09-30 23:30:39.964632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.161 [2024-09-30 23:30:39.967032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.161 [2024-09-30 23:30:39.967063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.161 BaseBdev1 00:13:00.161 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.161 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.161 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.161 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.161 23:30:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.161 BaseBdev2_malloc 00:13:00.161 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.161 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:00.161 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.161 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 [2024-09-30 23:30:40.013581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:00.421 [2024-09-30 23:30:40.013689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:00.421 [2024-09-30 23:30:40.013734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:00.421 [2024-09-30 23:30:40.013755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.421 [2024-09-30 23:30:40.018520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.421 [2024-09-30 23:30:40.018585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.421 BaseBdev2 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 BaseBdev3_malloc 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 [2024-09-30 23:30:40.051077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:00.421 [2024-09-30 23:30:40.051123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.421 [2024-09-30 23:30:40.051151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:00.421 
[2024-09-30 23:30:40.051160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.421 [2024-09-30 23:30:40.053569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.421 [2024-09-30 23:30:40.053600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:00.421 BaseBdev3 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 BaseBdev4_malloc 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 [2024-09-30 23:30:40.085727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:00.421 [2024-09-30 23:30:40.085780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.421 [2024-09-30 23:30:40.085804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:00.421 [2024-09-30 23:30:40.085812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.421 [2024-09-30 23:30:40.088134] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.421 [2024-09-30 23:30:40.088164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:00.421 BaseBdev4 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 spare_malloc 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 spare_delay 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.421 [2024-09-30 23:30:40.132264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:00.421 [2024-09-30 23:30:40.132315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.421 [2024-09-30 23:30:40.132337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:13:00.421 [2024-09-30 23:30:40.132345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.421 [2024-09-30 23:30:40.134611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.421 [2024-09-30 23:30:40.134642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:00.421 spare 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.421 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.422 [2024-09-30 23:30:40.144350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.422 [2024-09-30 23:30:40.146420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.422 [2024-09-30 23:30:40.146489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.422 [2024-09-30 23:30:40.146531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:00.422 [2024-09-30 23:30:40.146710] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:00.422 [2024-09-30 23:30:40.146737] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:00.422 [2024-09-30 23:30:40.147009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:00.422 [2024-09-30 23:30:40.147178] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:00.422 [2024-09-30 23:30:40.147198] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:00.422 [2024-09-30 23:30:40.147332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.422 "name": "raid_bdev1", 00:13:00.422 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:00.422 "strip_size_kb": 0, 00:13:00.422 "state": "online", 00:13:00.422 "raid_level": "raid1", 00:13:00.422 "superblock": true, 00:13:00.422 "num_base_bdevs": 4, 00:13:00.422 "num_base_bdevs_discovered": 4, 00:13:00.422 "num_base_bdevs_operational": 4, 00:13:00.422 "base_bdevs_list": [ 00:13:00.422 { 00:13:00.422 "name": "BaseBdev1", 00:13:00.422 "uuid": "fe4c54c6-8797-59b6-87a4-91cff063fa9e", 00:13:00.422 "is_configured": true, 00:13:00.422 "data_offset": 2048, 00:13:00.422 "data_size": 63488 00:13:00.422 }, 00:13:00.422 { 00:13:00.422 "name": "BaseBdev2", 00:13:00.422 "uuid": "4976f264-161e-5c50-9cc4-e6a0c4d6b248", 00:13:00.422 "is_configured": true, 00:13:00.422 "data_offset": 2048, 00:13:00.422 "data_size": 63488 00:13:00.422 }, 00:13:00.422 { 00:13:00.422 "name": "BaseBdev3", 00:13:00.422 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:00.422 "is_configured": true, 00:13:00.422 "data_offset": 2048, 00:13:00.422 "data_size": 63488 00:13:00.422 }, 00:13:00.422 { 00:13:00.422 "name": "BaseBdev4", 00:13:00.422 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:00.422 "is_configured": true, 00:13:00.422 "data_offset": 2048, 00:13:00.422 "data_size": 63488 00:13:00.422 } 00:13:00.422 ] 00:13:00.422 }' 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.422 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.992 [2024-09-30 23:30:40.567852] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.992 [2024-09-30 23:30:40.655394] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.992 23:30:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.992 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.993 "name": "raid_bdev1", 00:13:00.993 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:00.993 "strip_size_kb": 0, 00:13:00.993 "state": "online", 00:13:00.993 "raid_level": "raid1", 00:13:00.993 
"superblock": true, 00:13:00.993 "num_base_bdevs": 4, 00:13:00.993 "num_base_bdevs_discovered": 3, 00:13:00.993 "num_base_bdevs_operational": 3, 00:13:00.993 "base_bdevs_list": [ 00:13:00.993 { 00:13:00.993 "name": null, 00:13:00.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.993 "is_configured": false, 00:13:00.993 "data_offset": 0, 00:13:00.993 "data_size": 63488 00:13:00.993 }, 00:13:00.993 { 00:13:00.993 "name": "BaseBdev2", 00:13:00.993 "uuid": "4976f264-161e-5c50-9cc4-e6a0c4d6b248", 00:13:00.993 "is_configured": true, 00:13:00.993 "data_offset": 2048, 00:13:00.993 "data_size": 63488 00:13:00.993 }, 00:13:00.993 { 00:13:00.993 "name": "BaseBdev3", 00:13:00.993 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:00.993 "is_configured": true, 00:13:00.993 "data_offset": 2048, 00:13:00.993 "data_size": 63488 00:13:00.993 }, 00:13:00.993 { 00:13:00.993 "name": "BaseBdev4", 00:13:00.993 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:00.993 "is_configured": true, 00:13:00.993 "data_offset": 2048, 00:13:00.993 "data_size": 63488 00:13:00.993 } 00:13:00.993 ] 00:13:00.993 }' 00:13:00.993 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.993 23:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.993 [2024-09-30 23:30:40.742758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:00.993 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:00.993 Zero copy mechanism will not be used. 00:13:00.993 Running I/O for 60 seconds... 
00:13:01.253 23:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.253 23:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.253 23:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.253 [2024-09-30 23:30:41.102880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.512 23:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.512 23:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:01.512 [2024-09-30 23:30:41.144332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:01.512 [2024-09-30 23:30:41.146658] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.512 [2024-09-30 23:30:41.257449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:01.512 [2024-09-30 23:30:41.259454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:01.771 [2024-09-30 23:30:41.475646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:01.771 [2024-09-30 23:30:41.476733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:02.032 121.00 IOPS, 363.00 MiB/s [2024-09-30 23:30:41.810843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:02.032 [2024-09-30 23:30:41.811358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:02.292 [2024-09-30 23:30:42.060424] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.292 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.551 "name": "raid_bdev1", 00:13:02.551 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:02.551 "strip_size_kb": 0, 00:13:02.551 "state": "online", 00:13:02.551 "raid_level": "raid1", 00:13:02.551 "superblock": true, 00:13:02.551 "num_base_bdevs": 4, 00:13:02.551 "num_base_bdevs_discovered": 4, 00:13:02.551 "num_base_bdevs_operational": 4, 00:13:02.551 "process": { 00:13:02.551 "type": "rebuild", 00:13:02.551 "target": "spare", 00:13:02.551 "progress": { 00:13:02.551 "blocks": 10240, 00:13:02.551 "percent": 16 00:13:02.551 } 00:13:02.551 }, 00:13:02.551 "base_bdevs_list": [ 00:13:02.551 { 00:13:02.551 "name": "spare", 
00:13:02.551 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:02.551 "is_configured": true, 00:13:02.551 "data_offset": 2048, 00:13:02.551 "data_size": 63488 00:13:02.551 }, 00:13:02.551 { 00:13:02.551 "name": "BaseBdev2", 00:13:02.551 "uuid": "4976f264-161e-5c50-9cc4-e6a0c4d6b248", 00:13:02.551 "is_configured": true, 00:13:02.551 "data_offset": 2048, 00:13:02.551 "data_size": 63488 00:13:02.551 }, 00:13:02.551 { 00:13:02.551 "name": "BaseBdev3", 00:13:02.551 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:02.551 "is_configured": true, 00:13:02.551 "data_offset": 2048, 00:13:02.551 "data_size": 63488 00:13:02.551 }, 00:13:02.551 { 00:13:02.551 "name": "BaseBdev4", 00:13:02.551 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:02.551 "is_configured": true, 00:13:02.551 "data_offset": 2048, 00:13:02.551 "data_size": 63488 00:13:02.551 } 00:13:02.551 ] 00:13:02.551 }' 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.551 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.551 [2024-09-30 23:30:42.274549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.551 [2024-09-30 23:30:42.327503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:02.551 [2024-09-30 
23:30:42.329574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:02.810 [2024-09-30 23:30:42.431410] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.810 [2024-09-30 23:30:42.443939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.810 [2024-09-30 23:30:42.443993] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.810 [2024-09-30 23:30:42.444013] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.810 [2024-09-30 23:30:42.458786] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.810 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.810 "name": "raid_bdev1", 00:13:02.810 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:02.810 "strip_size_kb": 0, 00:13:02.810 "state": "online", 00:13:02.810 "raid_level": "raid1", 00:13:02.810 "superblock": true, 00:13:02.810 "num_base_bdevs": 4, 00:13:02.810 "num_base_bdevs_discovered": 3, 00:13:02.810 "num_base_bdevs_operational": 3, 00:13:02.810 "base_bdevs_list": [ 00:13:02.810 { 00:13:02.810 "name": null, 00:13:02.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.810 "is_configured": false, 00:13:02.810 "data_offset": 0, 00:13:02.811 "data_size": 63488 00:13:02.811 }, 00:13:02.811 { 00:13:02.811 "name": "BaseBdev2", 00:13:02.811 "uuid": "4976f264-161e-5c50-9cc4-e6a0c4d6b248", 00:13:02.811 "is_configured": true, 00:13:02.811 "data_offset": 2048, 00:13:02.811 "data_size": 63488 00:13:02.811 }, 00:13:02.811 { 00:13:02.811 "name": "BaseBdev3", 00:13:02.811 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:02.811 "is_configured": true, 00:13:02.811 "data_offset": 2048, 00:13:02.811 "data_size": 63488 00:13:02.811 }, 00:13:02.811 { 00:13:02.811 "name": "BaseBdev4", 00:13:02.811 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:02.811 "is_configured": true, 00:13:02.811 "data_offset": 2048, 00:13:02.811 "data_size": 63488 00:13:02.811 } 
00:13:02.811 ] 00:13:02.811 }' 00:13:02.811 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.811 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 137.50 IOPS, 412.50 MiB/s 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.329 "name": "raid_bdev1", 00:13:03.329 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:03.329 "strip_size_kb": 0, 00:13:03.329 "state": "online", 00:13:03.329 "raid_level": "raid1", 00:13:03.329 "superblock": true, 00:13:03.329 "num_base_bdevs": 4, 00:13:03.329 "num_base_bdevs_discovered": 3, 00:13:03.329 "num_base_bdevs_operational": 3, 00:13:03.329 "base_bdevs_list": [ 00:13:03.329 { 00:13:03.329 "name": null, 00:13:03.329 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:03.329 "is_configured": false, 00:13:03.329 "data_offset": 0, 00:13:03.329 "data_size": 63488 00:13:03.329 }, 00:13:03.329 { 00:13:03.329 "name": "BaseBdev2", 00:13:03.329 "uuid": "4976f264-161e-5c50-9cc4-e6a0c4d6b248", 00:13:03.329 "is_configured": true, 00:13:03.329 "data_offset": 2048, 00:13:03.329 "data_size": 63488 00:13:03.329 }, 00:13:03.329 { 00:13:03.329 "name": "BaseBdev3", 00:13:03.329 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:03.329 "is_configured": true, 00:13:03.329 "data_offset": 2048, 00:13:03.329 "data_size": 63488 00:13:03.329 }, 00:13:03.329 { 00:13:03.329 "name": "BaseBdev4", 00:13:03.329 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:03.329 "is_configured": true, 00:13:03.329 "data_offset": 2048, 00:13:03.329 "data_size": 63488 00:13:03.329 } 00:13:03.329 ] 00:13:03.329 }' 00:13:03.329 23:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.329 23:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.329 23:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.329 23:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.329 23:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.329 23:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.329 23:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 [2024-09-30 23:30:43.088610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.329 23:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.329 23:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:03.329 [2024-09-30 23:30:43.110103] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:03.329 [2024-09-30 23:30:43.112401] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.588 [2024-09-30 23:30:43.222755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.588 [2024-09-30 23:30:43.224991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.847 [2024-09-30 23:30:43.456018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.847 [2024-09-30 23:30:43.457125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:04.106 148.33 IOPS, 445.00 MiB/s [2024-09-30 23:30:43.834424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:04.106 [2024-09-30 23:30:43.835217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:04.106 [2024-09-30 23:30:43.941965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:04.106 [2024-09-30 23:30:43.942269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.367 "name": "raid_bdev1", 00:13:04.367 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:04.367 "strip_size_kb": 0, 00:13:04.367 "state": "online", 00:13:04.367 "raid_level": "raid1", 00:13:04.367 "superblock": true, 00:13:04.367 "num_base_bdevs": 4, 00:13:04.367 "num_base_bdevs_discovered": 4, 00:13:04.367 "num_base_bdevs_operational": 4, 00:13:04.367 "process": { 00:13:04.367 "type": "rebuild", 00:13:04.367 "target": "spare", 00:13:04.367 "progress": { 00:13:04.367 "blocks": 10240, 00:13:04.367 "percent": 16 00:13:04.367 } 00:13:04.367 }, 00:13:04.367 "base_bdevs_list": [ 00:13:04.367 { 00:13:04.367 "name": "spare", 00:13:04.367 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:04.367 "is_configured": true, 00:13:04.367 "data_offset": 2048, 00:13:04.367 "data_size": 63488 00:13:04.367 }, 00:13:04.367 { 00:13:04.367 "name": "BaseBdev2", 00:13:04.367 "uuid": "4976f264-161e-5c50-9cc4-e6a0c4d6b248", 00:13:04.367 "is_configured": true, 00:13:04.367 "data_offset": 2048, 00:13:04.367 "data_size": 63488 00:13:04.367 }, 00:13:04.367 { 00:13:04.367 "name": "BaseBdev3", 00:13:04.367 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:04.367 "is_configured": true, 00:13:04.367 
"data_offset": 2048, 00:13:04.367 "data_size": 63488 00:13:04.367 }, 00:13:04.367 { 00:13:04.367 "name": "BaseBdev4", 00:13:04.367 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:04.367 "is_configured": true, 00:13:04.367 "data_offset": 2048, 00:13:04.367 "data_size": 63488 00:13:04.367 } 00:13:04.367 ] 00:13:04.367 }' 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.367 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:04.627 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.627 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.627 [2024-09-30 23:30:44.270053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.887 [2024-09-30 23:30:44.522100] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 
raid_ch: 0x60d000006080 00:13:04.887 [2024-09-30 23:30:44.522146] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:04.887 [2024-09-30 23:30:44.530866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.887 "name": "raid_bdev1", 00:13:04.887 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 
00:13:04.887 "strip_size_kb": 0, 00:13:04.887 "state": "online", 00:13:04.887 "raid_level": "raid1", 00:13:04.887 "superblock": true, 00:13:04.887 "num_base_bdevs": 4, 00:13:04.887 "num_base_bdevs_discovered": 3, 00:13:04.887 "num_base_bdevs_operational": 3, 00:13:04.887 "process": { 00:13:04.887 "type": "rebuild", 00:13:04.887 "target": "spare", 00:13:04.887 "progress": { 00:13:04.887 "blocks": 14336, 00:13:04.887 "percent": 22 00:13:04.887 } 00:13:04.887 }, 00:13:04.887 "base_bdevs_list": [ 00:13:04.887 { 00:13:04.887 "name": "spare", 00:13:04.887 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:04.887 "is_configured": true, 00:13:04.887 "data_offset": 2048, 00:13:04.887 "data_size": 63488 00:13:04.887 }, 00:13:04.887 { 00:13:04.887 "name": null, 00:13:04.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.887 "is_configured": false, 00:13:04.887 "data_offset": 0, 00:13:04.887 "data_size": 63488 00:13:04.887 }, 00:13:04.887 { 00:13:04.887 "name": "BaseBdev3", 00:13:04.887 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:04.887 "is_configured": true, 00:13:04.887 "data_offset": 2048, 00:13:04.887 "data_size": 63488 00:13:04.887 }, 00:13:04.887 { 00:13:04.887 "name": "BaseBdev4", 00:13:04.887 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:04.887 "is_configured": true, 00:13:04.887 "data_offset": 2048, 00:13:04.887 "data_size": 63488 00:13:04.887 } 00:13:04.887 ] 00:13:04.887 }' 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=405 
00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.887 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.888 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.888 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.888 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.888 "name": "raid_bdev1", 00:13:04.888 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:04.888 "strip_size_kb": 0, 00:13:04.888 "state": "online", 00:13:04.888 "raid_level": "raid1", 00:13:04.888 "superblock": true, 00:13:04.888 "num_base_bdevs": 4, 00:13:04.888 "num_base_bdevs_discovered": 3, 00:13:04.888 "num_base_bdevs_operational": 3, 00:13:04.888 "process": { 00:13:04.888 "type": "rebuild", 00:13:04.888 "target": "spare", 00:13:04.888 "progress": { 00:13:04.888 "blocks": 14336, 00:13:04.888 "percent": 22 00:13:04.888 } 00:13:04.888 }, 00:13:04.888 "base_bdevs_list": [ 00:13:04.888 { 00:13:04.888 "name": "spare", 
00:13:04.888 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:04.888 "is_configured": true, 00:13:04.888 "data_offset": 2048, 00:13:04.888 "data_size": 63488 00:13:04.888 }, 00:13:04.888 { 00:13:04.888 "name": null, 00:13:04.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.888 "is_configured": false, 00:13:04.888 "data_offset": 0, 00:13:04.888 "data_size": 63488 00:13:04.888 }, 00:13:04.888 { 00:13:04.888 "name": "BaseBdev3", 00:13:04.888 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:04.888 "is_configured": true, 00:13:04.888 "data_offset": 2048, 00:13:04.888 "data_size": 63488 00:13:04.888 }, 00:13:04.888 { 00:13:04.888 "name": "BaseBdev4", 00:13:04.888 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:04.888 "is_configured": true, 00:13:04.888 "data_offset": 2048, 00:13:04.888 "data_size": 63488 00:13:04.888 } 00:13:04.888 ] 00:13:04.888 }' 00:13:04.888 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.147 127.25 IOPS, 381.75 MiB/s [2024-09-30 23:30:44.770134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:05.147 [2024-09-30 23:30:44.770835] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:05.147 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.147 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.147 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.147 23:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:05.407 [2024-09-30 23:30:45.102120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:05.407 [2024-09-30 
23:30:45.103456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:06.000 [2024-09-30 23:30:45.583368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:06.000 113.40 IOPS, 340.20 MiB/s 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.000 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.000 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.000 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.000 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.000 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.283 "name": "raid_bdev1", 00:13:06.283 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:06.283 "strip_size_kb": 0, 00:13:06.283 "state": "online", 00:13:06.283 "raid_level": "raid1", 00:13:06.283 "superblock": true, 00:13:06.283 "num_base_bdevs": 4, 00:13:06.283 "num_base_bdevs_discovered": 3, 
00:13:06.283 "num_base_bdevs_operational": 3, 00:13:06.283 "process": { 00:13:06.283 "type": "rebuild", 00:13:06.283 "target": "spare", 00:13:06.283 "progress": { 00:13:06.283 "blocks": 30720, 00:13:06.283 "percent": 48 00:13:06.283 } 00:13:06.283 }, 00:13:06.283 "base_bdevs_list": [ 00:13:06.283 { 00:13:06.283 "name": "spare", 00:13:06.283 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:06.283 "is_configured": true, 00:13:06.283 "data_offset": 2048, 00:13:06.283 "data_size": 63488 00:13:06.283 }, 00:13:06.283 { 00:13:06.283 "name": null, 00:13:06.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.283 "is_configured": false, 00:13:06.283 "data_offset": 0, 00:13:06.283 "data_size": 63488 00:13:06.283 }, 00:13:06.283 { 00:13:06.283 "name": "BaseBdev3", 00:13:06.283 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:06.283 "is_configured": true, 00:13:06.283 "data_offset": 2048, 00:13:06.283 "data_size": 63488 00:13:06.283 }, 00:13:06.283 { 00:13:06.283 "name": "BaseBdev4", 00:13:06.283 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:06.283 "is_configured": true, 00:13:06.283 "data_offset": 2048, 00:13:06.283 "data_size": 63488 00:13:06.283 } 00:13:06.283 ] 00:13:06.283 }' 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.283 23:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.283 [2024-09-30 23:30:46.030663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:06.555 [2024-09-30 23:30:46.342145] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:06.814 [2024-09-30 23:30:46.558587] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:06.814 [2024-09-30 23:30:46.559949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:07.074 100.33 IOPS, 301.00 MiB/s [2024-09-30 23:30:46.785156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:07.333 23:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.333 23:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.333 23:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.333 23:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.333 23:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.333 23:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.333 
"name": "raid_bdev1", 00:13:07.333 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:07.333 "strip_size_kb": 0, 00:13:07.333 "state": "online", 00:13:07.333 "raid_level": "raid1", 00:13:07.333 "superblock": true, 00:13:07.333 "num_base_bdevs": 4, 00:13:07.333 "num_base_bdevs_discovered": 3, 00:13:07.333 "num_base_bdevs_operational": 3, 00:13:07.333 "process": { 00:13:07.333 "type": "rebuild", 00:13:07.333 "target": "spare", 00:13:07.333 "progress": { 00:13:07.333 "blocks": 49152, 00:13:07.333 "percent": 77 00:13:07.333 } 00:13:07.333 }, 00:13:07.333 "base_bdevs_list": [ 00:13:07.333 { 00:13:07.333 "name": "spare", 00:13:07.333 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:07.333 "is_configured": true, 00:13:07.333 "data_offset": 2048, 00:13:07.333 "data_size": 63488 00:13:07.333 }, 00:13:07.333 { 00:13:07.333 "name": null, 00:13:07.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.333 "is_configured": false, 00:13:07.333 "data_offset": 0, 00:13:07.333 "data_size": 63488 00:13:07.333 }, 00:13:07.333 { 00:13:07.333 "name": "BaseBdev3", 00:13:07.333 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:07.333 "is_configured": true, 00:13:07.333 "data_offset": 2048, 00:13:07.333 "data_size": 63488 00:13:07.333 }, 00:13:07.333 { 00:13:07.333 "name": "BaseBdev4", 00:13:07.333 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:07.333 "is_configured": true, 00:13:07.333 "data_offset": 2048, 00:13:07.333 "data_size": 63488 00:13:07.333 } 00:13:07.333 ] 00:13:07.333 }' 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.333 23:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.333 23:30:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.901 [2024-09-30 23:30:47.445793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:08.160 91.14 IOPS, 273.43 MiB/s [2024-09-30 23:30:47.771809] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:08.160 [2024-09-30 23:30:47.871599] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:08.160 [2024-09-30 23:30:47.875946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:08.419 "name": "raid_bdev1", 00:13:08.419 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:08.419 "strip_size_kb": 0, 00:13:08.419 "state": "online", 00:13:08.419 "raid_level": "raid1", 00:13:08.419 "superblock": true, 00:13:08.419 "num_base_bdevs": 4, 00:13:08.419 "num_base_bdevs_discovered": 3, 00:13:08.419 "num_base_bdevs_operational": 3, 00:13:08.419 "base_bdevs_list": [ 00:13:08.419 { 00:13:08.419 "name": "spare", 00:13:08.419 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:08.419 "is_configured": true, 00:13:08.419 "data_offset": 2048, 00:13:08.419 "data_size": 63488 00:13:08.419 }, 00:13:08.419 { 00:13:08.419 "name": null, 00:13:08.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.419 "is_configured": false, 00:13:08.419 "data_offset": 0, 00:13:08.419 "data_size": 63488 00:13:08.419 }, 00:13:08.419 { 00:13:08.419 "name": "BaseBdev3", 00:13:08.419 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:08.419 "is_configured": true, 00:13:08.419 "data_offset": 2048, 00:13:08.419 "data_size": 63488 00:13:08.419 }, 00:13:08.419 { 00:13:08.419 "name": "BaseBdev4", 00:13:08.419 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:08.419 "is_configured": true, 00:13:08.419 "data_offset": 2048, 00:13:08.419 "data_size": 63488 00:13:08.419 } 00:13:08.419 ] 00:13:08.419 }' 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:08.419 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.678 "name": "raid_bdev1", 00:13:08.678 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:08.678 "strip_size_kb": 0, 00:13:08.678 "state": "online", 00:13:08.678 "raid_level": "raid1", 00:13:08.678 "superblock": true, 00:13:08.678 "num_base_bdevs": 4, 00:13:08.678 "num_base_bdevs_discovered": 3, 00:13:08.678 "num_base_bdevs_operational": 3, 00:13:08.678 "base_bdevs_list": [ 00:13:08.678 { 00:13:08.678 "name": "spare", 00:13:08.678 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:08.678 "is_configured": true, 00:13:08.678 "data_offset": 2048, 00:13:08.678 "data_size": 63488 00:13:08.678 }, 00:13:08.678 { 00:13:08.678 "name": null, 00:13:08.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.678 "is_configured": false, 00:13:08.678 "data_offset": 0, 00:13:08.678 "data_size": 63488 00:13:08.678 }, 00:13:08.678 { 00:13:08.678 "name": "BaseBdev3", 
00:13:08.678 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:08.678 "is_configured": true, 00:13:08.678 "data_offset": 2048, 00:13:08.678 "data_size": 63488 00:13:08.678 }, 00:13:08.678 { 00:13:08.678 "name": "BaseBdev4", 00:13:08.678 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:08.678 "is_configured": true, 00:13:08.678 "data_offset": 2048, 00:13:08.678 "data_size": 63488 00:13:08.678 } 00:13:08.678 ] 00:13:08.678 }' 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.678 "name": "raid_bdev1", 00:13:08.678 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:08.678 "strip_size_kb": 0, 00:13:08.678 "state": "online", 00:13:08.678 "raid_level": "raid1", 00:13:08.678 "superblock": true, 00:13:08.678 "num_base_bdevs": 4, 00:13:08.678 "num_base_bdevs_discovered": 3, 00:13:08.678 "num_base_bdevs_operational": 3, 00:13:08.678 "base_bdevs_list": [ 00:13:08.678 { 00:13:08.678 "name": "spare", 00:13:08.678 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183", 00:13:08.678 "is_configured": true, 00:13:08.678 "data_offset": 2048, 00:13:08.678 "data_size": 63488 00:13:08.678 }, 00:13:08.678 { 00:13:08.678 "name": null, 00:13:08.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.678 "is_configured": false, 00:13:08.678 "data_offset": 0, 00:13:08.678 "data_size": 63488 00:13:08.678 }, 00:13:08.678 { 00:13:08.678 "name": "BaseBdev3", 00:13:08.678 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:08.678 "is_configured": true, 00:13:08.678 "data_offset": 2048, 00:13:08.678 "data_size": 63488 00:13:08.678 }, 00:13:08.678 { 00:13:08.678 "name": "BaseBdev4", 00:13:08.678 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:08.678 "is_configured": true, 00:13:08.678 "data_offset": 2048, 00:13:08.678 "data_size": 63488 00:13:08.678 } 
00:13:08.678 ] 00:13:08.678 }' 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.678 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.195 84.50 IOPS, 253.50 MiB/s 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.195 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.195 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.195 [2024-09-30 23:30:48.842581] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.195 [2024-09-30 23:30:48.842628] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.195 00:13:09.195 Latency(us) 00:13:09.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.195 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:09.195 raid_bdev1 : 8.12 83.49 250.46 0.00 0.00 17014.58 291.55 118136.51 00:13:09.195 =================================================================================================================== 00:13:09.195 Total : 83.49 250.46 0.00 0.00 17014.58 291.55 118136.51 00:13:09.196 [2024-09-30 23:30:48.854019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.196 [2024-09-30 23:30:48.854070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.196 [2024-09-30 23:30:48.854196] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.196 [2024-09-30 23:30:48.854219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:09.196 { 00:13:09.196 "results": [ 00:13:09.196 { 00:13:09.196 "job": "raid_bdev1", 00:13:09.196 "core_mask": 
"0x1", 00:13:09.196 "workload": "randrw", 00:13:09.196 "percentage": 50, 00:13:09.196 "status": "finished", 00:13:09.196 "queue_depth": 2, 00:13:09.196 "io_size": 3145728, 00:13:09.196 "runtime": 8.121019, 00:13:09.196 "iops": 83.48705993669022, 00:13:09.196 "mibps": 250.46117981007063, 00:13:09.196 "io_failed": 0, 00:13:09.196 "io_timeout": 0, 00:13:09.196 "avg_latency_us": 17014.575641174273, 00:13:09.196 "min_latency_us": 291.54934497816595, 00:13:09.196 "max_latency_us": 118136.51004366812 00:13:09.196 } 00:13:09.196 ], 00:13:09.196 "core_count": 1 00:13:09.196 } 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.196 23:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:09.454 /dev/nbd0 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.454 1+0 records 
in 00:13:09.454 1+0 records out 00:13:09.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371628 s, 11.0 MB/s 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:09.454 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.455 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:09.714 /dev/nbd1 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.714 1+0 records in 00:13:09.714 1+0 records out 00:13:09.714 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000403678 s, 10.1 MB/s 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.714 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:09.973 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:10.232 /dev/nbd1 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.232 1+0 records in 00:13:10.232 1+0 records out 00:13:10.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320683 s, 12.8 MB/s 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.232 23:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:10.232 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:10.232 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.232 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:10.232 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.232 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.232 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.232 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:10.493 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:10.752 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.753 [2024-09-30 23:30:50.471403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:10.753 [2024-09-30 23:30:50.471463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:10.753 [2024-09-30 23:30:50.471484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:13:10.753 [2024-09-30 23:30:50.471499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:10.753 [2024-09-30 23:30:50.474060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:10.753 [2024-09-30 23:30:50.474098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:10.753 [2024-09-30 23:30:50.474191] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:10.753 [2024-09-30 23:30:50.474242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:10.753 [2024-09-30 23:30:50.474362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:10.753 [2024-09-30 23:30:50.474481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:10.753 spare
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.753 [2024-09-30 23:30:50.574385] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600
00:13:10.753 [2024-09-30 23:30:50.574426] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:13:10.753 [2024-09-30 23:30:50.574733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0
00:13:10.753 [2024-09-30 23:30:50.574926] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600
00:13:10.753 [2024-09-30 23:30:50.574943] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600
00:13:10.753 [2024-09-30 23:30:50.575094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:10.753 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.013 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.013 "name": "raid_bdev1",
00:13:11.013 "uuid": "07cbec61-df67-431f-b266-7803f82df393",
00:13:11.013 "strip_size_kb": 0,
00:13:11.013 "state": "online",
00:13:11.013 "raid_level": "raid1",
00:13:11.013 "superblock": true,
00:13:11.013 "num_base_bdevs": 4,
00:13:11.013 "num_base_bdevs_discovered": 3,
00:13:11.013 "num_base_bdevs_operational": 3,
00:13:11.013 "base_bdevs_list": [
00:13:11.013 {
00:13:11.013 "name": "spare",
00:13:11.013 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183",
00:13:11.013 "is_configured": true,
00:13:11.013 "data_offset": 2048,
00:13:11.013 "data_size": 63488
00:13:11.013 },
00:13:11.013 {
00:13:11.013 "name": null,
00:13:11.013 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.013 "is_configured": false,
00:13:11.013 "data_offset": 2048,
00:13:11.013 "data_size": 63488
00:13:11.013 },
00:13:11.013 {
00:13:11.013 "name": "BaseBdev3",
00:13:11.013 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008",
00:13:11.013 "is_configured": true,
00:13:11.013 "data_offset": 2048,
00:13:11.013 "data_size": 63488
00:13:11.013 },
00:13:11.013 {
00:13:11.013 "name": "BaseBdev4",
00:13:11.013 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588",
00:13:11.013 "is_configured": true,
00:13:11.013 "data_offset": 2048,
00:13:11.013 "data_size": 63488
00:13:11.013 }
00:13:11.013 ]
00:13:11.013 }'
00:13:11.013 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.013 23:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:11.273 "name": "raid_bdev1",
00:13:11.273 "uuid": "07cbec61-df67-431f-b266-7803f82df393",
00:13:11.273 "strip_size_kb": 0,
00:13:11.273 "state": "online",
00:13:11.273 "raid_level": "raid1",
00:13:11.273 "superblock": true,
00:13:11.273 "num_base_bdevs": 4,
00:13:11.273 "num_base_bdevs_discovered": 3,
00:13:11.273 "num_base_bdevs_operational": 3,
00:13:11.273 "base_bdevs_list": [
00:13:11.273 {
00:13:11.273 "name": "spare",
00:13:11.273 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183",
00:13:11.273 "is_configured": true,
00:13:11.273 "data_offset": 2048,
00:13:11.273 "data_size": 63488
00:13:11.273 },
00:13:11.273 {
00:13:11.273 "name": null,
00:13:11.273 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.273 "is_configured": false,
00:13:11.273 "data_offset": 2048,
00:13:11.273 "data_size": 63488
00:13:11.273 },
00:13:11.273 {
00:13:11.273 "name": "BaseBdev3",
00:13:11.273 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008",
00:13:11.273 "is_configured": true,
00:13:11.273 "data_offset": 2048,
00:13:11.273 "data_size": 63488
00:13:11.273 },
00:13:11.273 {
00:13:11.273 "name": "BaseBdev4",
00:13:11.273 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588",
00:13:11.273 "is_configured": true,
00:13:11.273 "data_offset": 2048,
00:13:11.273 "data_size": 63488
00:13:11.273 }
00:13:11.273 ]
00:13:11.273 }'
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:11.273 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.534 [2024-09-30 23:30:51.214361] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.534 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.535 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.535 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.535 "name": "raid_bdev1",
00:13:11.535 "uuid": "07cbec61-df67-431f-b266-7803f82df393",
00:13:11.535 "strip_size_kb": 0,
00:13:11.535 "state": "online",
00:13:11.535 "raid_level": "raid1",
00:13:11.535 "superblock": true,
00:13:11.535 "num_base_bdevs": 4,
00:13:11.535 "num_base_bdevs_discovered": 2,
00:13:11.535 "num_base_bdevs_operational": 2,
00:13:11.535 "base_bdevs_list": [
00:13:11.535 {
00:13:11.535 "name": null,
00:13:11.535 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.535 "is_configured": false,
00:13:11.535 "data_offset": 0,
00:13:11.535 "data_size": 63488
00:13:11.535 },
00:13:11.535 {
00:13:11.535 "name": null,
00:13:11.535 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:11.535 "is_configured": false,
00:13:11.535 "data_offset": 2048,
00:13:11.535 "data_size": 63488
00:13:11.535 },
00:13:11.535 {
00:13:11.535 "name": "BaseBdev3",
00:13:11.535 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008",
00:13:11.535 "is_configured": true,
00:13:11.535 "data_offset": 2048,
00:13:11.535 "data_size": 63488
00:13:11.535 },
00:13:11.535 {
00:13:11.535 "name": "BaseBdev4",
00:13:11.535 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588",
00:13:11.535 "is_configured": true,
00:13:11.535 "data_offset": 2048,
00:13:11.535 "data_size": 63488
00:13:11.535 }
00:13:11.535 ]
00:13:11.535 }'
00:13:11.535 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.535 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.794 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:11.794 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.794 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:11.794 [2024-09-30 23:30:51.641685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:11.794 [2024-09-30 23:30:51.641938] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:13:11.794 [2024-09-30 23:30:51.641961] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:11.795 [2024-09-30 23:30:51.641999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:12.054 [2024-09-30 23:30:51.648401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090
00:13:12.054 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.054 23:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:13:12.054 [2024-09-30 23:30:51.650606] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:12.992 "name": "raid_bdev1",
00:13:12.992 "uuid": "07cbec61-df67-431f-b266-7803f82df393",
00:13:12.992 "strip_size_kb": 0,
00:13:12.992 "state": "online",
00:13:12.992 "raid_level": "raid1",
00:13:12.992 "superblock": true,
00:13:12.992 "num_base_bdevs": 4,
00:13:12.992 "num_base_bdevs_discovered": 3,
00:13:12.992 "num_base_bdevs_operational": 3,
00:13:12.992 "process": {
00:13:12.992 "type": "rebuild",
00:13:12.992 "target": "spare",
00:13:12.992 "progress": {
00:13:12.992 "blocks": 20480,
00:13:12.992 "percent": 32
00:13:12.992 }
00:13:12.992 },
00:13:12.992 "base_bdevs_list": [
00:13:12.992 {
00:13:12.992 "name": "spare",
00:13:12.992 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183",
00:13:12.992 "is_configured": true,
00:13:12.992 "data_offset": 2048,
00:13:12.992 "data_size": 63488
00:13:12.992 },
00:13:12.992 {
00:13:12.992 "name": null,
00:13:12.992 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:12.992 "is_configured": false,
00:13:12.992 "data_offset": 2048,
00:13:12.992 "data_size": 63488
00:13:12.992 },
00:13:12.992 {
00:13:12.992 "name": "BaseBdev3",
00:13:12.992 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008",
00:13:12.992 "is_configured": true,
00:13:12.992 "data_offset": 2048,
00:13:12.992 "data_size": 63488
00:13:12.992 },
00:13:12.992 {
00:13:12.992 "name": "BaseBdev4",
00:13:12.992 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588",
00:13:12.992 "is_configured": true,
00:13:12.992 "data_offset": 2048,
00:13:12.992 "data_size": 63488
00:13:12.992 }
00:13:12.992 ]
00:13:12.992 }'
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:12.992 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:13:12.993 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.993 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:12.993 [2024-09-30 23:30:52.791245] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:13.252 [2024-09-30 23:30:52.858714] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:13.252 [2024-09-30 23:30:52.858782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:13.252 [2024-09-30 23:30:52.858798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:13.252 [2024-09-30 23:30:52.858808] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:13.252 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.252 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:13.252 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:13.252 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:13.252 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:13.252 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:13.252 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:13.252 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:13.253 "name": "raid_bdev1",
00:13:13.253 "uuid": "07cbec61-df67-431f-b266-7803f82df393",
00:13:13.253 "strip_size_kb": 0,
00:13:13.253 "state": "online",
00:13:13.253 "raid_level": "raid1",
00:13:13.253 "superblock": true,
00:13:13.253 "num_base_bdevs": 4,
00:13:13.253 "num_base_bdevs_discovered": 2,
00:13:13.253 "num_base_bdevs_operational": 2,
00:13:13.253 "base_bdevs_list": [
00:13:13.253 {
00:13:13.253 "name": null,
00:13:13.253 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.253 "is_configured": false,
00:13:13.253 "data_offset": 0,
00:13:13.253 "data_size": 63488
00:13:13.253 },
00:13:13.253 {
00:13:13.253 "name": null,
00:13:13.253 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:13.253 "is_configured": false,
00:13:13.253 "data_offset": 2048,
00:13:13.253 "data_size": 63488
00:13:13.253 },
00:13:13.253 {
00:13:13.253 "name": "BaseBdev3",
00:13:13.253 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008",
00:13:13.253 "is_configured": true,
00:13:13.253 "data_offset": 2048,
00:13:13.253 "data_size": 63488
00:13:13.253 },
00:13:13.253 {
00:13:13.253 "name": "BaseBdev4",
00:13:13.253 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588",
00:13:13.253 "is_configured": true,
00:13:13.253 "data_offset": 2048,
00:13:13.253 "data_size": 63488
00:13:13.253 }
00:13:13.253 ]
00:13:13.253 }'
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:13.253 23:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.512 23:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:13.512 23:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.512 23:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:13.512 [2024-09-30 23:30:53.337420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:13.512 [2024-09-30 23:30:53.337503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:13.512 [2024-09-30 23:30:53.337533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:13:13.512 [2024-09-30 23:30:53.337546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:13.512 [2024-09-30 23:30:53.338091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:13.512 [2024-09-30 23:30:53.338114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:13.512 [2024-09-30 23:30:53.338214] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:13.512 [2024-09-30 23:30:53.338237] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:13:13.512 [2024-09-30 23:30:53.338249] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:13.512 [2024-09-30 23:30:53.338291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:13.513 [2024-09-30 23:30:53.344649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160
spare
00:13:13.513 23:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.513 23:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:13:13.513 [2024-09-30 23:30:53.346835] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:14.892 "name": "raid_bdev1",
00:13:14.892 "uuid": "07cbec61-df67-431f-b266-7803f82df393",
00:13:14.892 "strip_size_kb": 0,
00:13:14.892 "state": "online",
00:13:14.892 "raid_level": "raid1",
00:13:14.892 "superblock": true,
00:13:14.892 "num_base_bdevs": 4,
00:13:14.892 "num_base_bdevs_discovered": 3,
00:13:14.892 "num_base_bdevs_operational": 3,
00:13:14.892 "process": {
00:13:14.892 "type": "rebuild",
00:13:14.892 "target": "spare",
00:13:14.892 "progress": {
00:13:14.892 "blocks": 20480,
00:13:14.892 "percent": 32
00:13:14.892 }
00:13:14.892 },
00:13:14.892 "base_bdevs_list": [
00:13:14.892 {
00:13:14.892 "name": "spare",
00:13:14.892 "uuid": "9d64e198-36ff-5f24-b5c2-496e5f29b183",
00:13:14.892 "is_configured": true,
00:13:14.892 "data_offset": 2048,
00:13:14.892 "data_size": 63488
00:13:14.892 },
00:13:14.892 {
00:13:14.892 "name": null,
00:13:14.892 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:14.892 "is_configured": false,
00:13:14.892 "data_offset": 2048,
00:13:14.892 "data_size": 63488
00:13:14.892 },
00:13:14.892 {
00:13:14.892 "name": "BaseBdev3",
00:13:14.892 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008",
00:13:14.892 "is_configured": true,
00:13:14.892 "data_offset": 2048,
00:13:14.892 "data_size": 63488
00:13:14.892 },
00:13:14.892 {
00:13:14.892 "name": "BaseBdev4",
00:13:14.892 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588",
00:13:14.892 "is_configured": true,
00:13:14.892 "data_offset": 2048,
00:13:14.892 "data_size": 63488
00:13:14.892 }
00:13:14.892 ]
00:13:14.892 }'
00:13:14.892 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:14.893 [2024-09-30 23:30:54.483479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:14.893 [2024-09-30 23:30:54.553876] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:14.893 [2024-09-30 23:30:54.553935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:14.893 [2024-09-30 23:30:54.553955] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:14.893 [2024-09-30 23:30:54.553965] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:14.893 "name": "raid_bdev1",
00:13:14.893 "uuid": "07cbec61-df67-431f-b266-7803f82df393",
00:13:14.893 "strip_size_kb": 0,
00:13:14.893 "state": "online",
00:13:14.893 "raid_level": "raid1",
00:13:14.893 "superblock": true,
00:13:14.893 "num_base_bdevs": 4,
00:13:14.893 "num_base_bdevs_discovered": 2,
00:13:14.893 "num_base_bdevs_operational": 2,
00:13:14.893 "base_bdevs_list": [
00:13:14.893 {
00:13:14.893 "name": null,
00:13:14.893 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:14.893 "is_configured": false,
00:13:14.893 "data_offset": 0,
00:13:14.893 "data_size": 63488
00:13:14.893 },
00:13:14.893 {
00:13:14.893 "name": null,
00:13:14.893 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:14.893 "is_configured": false,
00:13:14.893 "data_offset": 2048,
00:13:14.893 "data_size": 63488
00:13:14.893 },
00:13:14.893 {
00:13:14.893 "name": "BaseBdev3",
00:13:14.893 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008",
00:13:14.893 "is_configured": true,
00:13:14.893 "data_offset": 2048,
00:13:14.893 "data_size": 63488
00:13:14.893 },
00:13:14.893 {
00:13:14.893 "name": "BaseBdev4",
00:13:14.893 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588",
00:13:14.893 "is_configured": true,
00:13:14.893 "data_offset": 2048,
00:13:14.893 "data_size": 63488
00:13:14.893 }
00:13:14.893 ]
00:13:14.893 }'
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:14.893 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.152 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:15.152 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:15.152 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:15.152 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:15.152 23:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:15.152 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:15.152 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.152 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.152 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:15.412 "name": "raid_bdev1",
00:13:15.412 "uuid": "07cbec61-df67-431f-b266-7803f82df393",
00:13:15.412 "strip_size_kb": 0,
00:13:15.412 "state": "online",
00:13:15.412 "raid_level": "raid1",
00:13:15.412 "superblock": true,
00:13:15.412 "num_base_bdevs": 4,
00:13:15.412 "num_base_bdevs_discovered": 2,
00:13:15.412 "num_base_bdevs_operational": 2,
00:13:15.412 "base_bdevs_list": [
00:13:15.412 {
00:13:15.412 "name": null,
00:13:15.412 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:15.412 "is_configured": false,
00:13:15.412 "data_offset": 0,
00:13:15.412 "data_size": 63488
00:13:15.412 },
00:13:15.412 {
00:13:15.412 "name": null,
00:13:15.412 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:15.412 "is_configured": false,
00:13:15.412 "data_offset": 2048,
00:13:15.412 "data_size": 63488
00:13:15.412 },
00:13:15.412 {
00:13:15.412 "name": "BaseBdev3",
00:13:15.412 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008",
00:13:15.412 "is_configured": true,
00:13:15.412 "data_offset": 2048,
00:13:15.412 "data_size": 63488
00:13:15.412 },
00:13:15.412 {
00:13:15.412 "name": "BaseBdev4",
00:13:15.412 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588",
00:13:15.412 "is_configured": true,
00:13:15.412 "data_offset": 2048,
00:13:15.412 "data_size": 63488
00:13:15.412 }
00:13:15.412 ]
00:13:15.412 }'
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:15.412 [2024-09-30 23:30:55.168951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:15.412 [2024-09-30 23:30:55.169004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:15.412 [2024-09-30 23:30:55.169028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80
00:13:15.412 [2024-09-30 23:30:55.169039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:15.412 [2024-09-30 23:30:55.169517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:15.412 [2024-09-30 23:30:55.169537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:15.412 [2024-09-30 23:30:55.169622] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:13:15.412 [2024-09-30 23:30:55.169650] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6)
00:13:15.412 [2024-09-30 23:30:55.169661] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:15.412 [2024-09-30 23:30:55.169672] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
BaseBdev1
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.412 23:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.387 "name": "raid_bdev1", 00:13:16.387 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:16.387 "strip_size_kb": 0, 00:13:16.387 "state": "online", 00:13:16.387 "raid_level": "raid1", 00:13:16.387 "superblock": true, 00:13:16.387 "num_base_bdevs": 4, 00:13:16.387 "num_base_bdevs_discovered": 2, 00:13:16.387 "num_base_bdevs_operational": 2, 00:13:16.387 "base_bdevs_list": [ 00:13:16.387 { 00:13:16.387 "name": null, 00:13:16.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.387 "is_configured": false, 00:13:16.387 "data_offset": 0, 00:13:16.387 "data_size": 63488 00:13:16.387 }, 00:13:16.387 { 00:13:16.387 "name": null, 00:13:16.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.387 "is_configured": false, 00:13:16.387 "data_offset": 2048, 00:13:16.387 "data_size": 63488 00:13:16.387 }, 00:13:16.387 { 00:13:16.387 "name": "BaseBdev3", 00:13:16.387 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:16.387 "is_configured": true, 00:13:16.387 "data_offset": 2048, 00:13:16.387 "data_size": 63488 00:13:16.387 }, 00:13:16.387 { 00:13:16.387 "name": "BaseBdev4", 00:13:16.387 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:16.387 "is_configured": true, 00:13:16.387 "data_offset": 2048, 00:13:16.387 "data_size": 63488 00:13:16.387 } 00:13:16.387 ] 00:13:16.387 }' 00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.387 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.958 23:30:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.958 "name": "raid_bdev1", 00:13:16.958 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:16.958 "strip_size_kb": 0, 00:13:16.958 "state": "online", 00:13:16.958 "raid_level": "raid1", 00:13:16.958 "superblock": true, 00:13:16.958 "num_base_bdevs": 4, 00:13:16.958 "num_base_bdevs_discovered": 2, 00:13:16.958 "num_base_bdevs_operational": 2, 00:13:16.958 "base_bdevs_list": [ 00:13:16.958 { 00:13:16.958 "name": null, 00:13:16.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.958 "is_configured": false, 00:13:16.958 "data_offset": 0, 00:13:16.958 "data_size": 63488 00:13:16.958 }, 00:13:16.958 { 00:13:16.958 "name": null, 00:13:16.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.958 "is_configured": false, 00:13:16.958 "data_offset": 2048, 00:13:16.958 "data_size": 63488 00:13:16.958 }, 00:13:16.958 { 00:13:16.958 "name": "BaseBdev3", 00:13:16.958 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:16.958 "is_configured": true, 00:13:16.958 "data_offset": 2048, 00:13:16.958 
"data_size": 63488 00:13:16.958 }, 00:13:16.958 { 00:13:16.958 "name": "BaseBdev4", 00:13:16.958 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:16.958 "is_configured": true, 00:13:16.958 "data_offset": 2048, 00:13:16.958 "data_size": 63488 00:13:16.958 } 00:13:16.958 ] 00:13:16.958 }' 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.958 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.959 
[2024-09-30 23:30:56.762977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.959 [2024-09-30 23:30:56.763165] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:16.959 [2024-09-30 23:30:56.763182] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:16.959 request: 00:13:16.959 { 00:13:16.959 "base_bdev": "BaseBdev1", 00:13:16.959 "raid_bdev": "raid_bdev1", 00:13:16.959 "method": "bdev_raid_add_base_bdev", 00:13:16.959 "req_id": 1 00:13:16.959 } 00:13:16.959 Got JSON-RPC error response 00:13:16.959 response: 00:13:16.959 { 00:13:16.959 "code": -22, 00:13:16.959 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:16.959 } 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.959 23:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.339 23:30:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.339 "name": "raid_bdev1", 00:13:18.339 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:18.339 "strip_size_kb": 0, 00:13:18.339 "state": "online", 00:13:18.339 "raid_level": "raid1", 00:13:18.339 "superblock": true, 00:13:18.339 "num_base_bdevs": 4, 00:13:18.339 "num_base_bdevs_discovered": 2, 00:13:18.339 "num_base_bdevs_operational": 2, 00:13:18.339 "base_bdevs_list": [ 00:13:18.339 { 00:13:18.339 "name": null, 00:13:18.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.339 "is_configured": false, 00:13:18.339 "data_offset": 0, 00:13:18.339 "data_size": 63488 00:13:18.339 }, 00:13:18.339 { 00:13:18.339 "name": null, 00:13:18.339 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:18.339 "is_configured": false, 00:13:18.339 "data_offset": 2048, 00:13:18.339 "data_size": 63488 00:13:18.339 }, 00:13:18.339 { 00:13:18.339 "name": "BaseBdev3", 00:13:18.339 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:18.339 "is_configured": true, 00:13:18.339 "data_offset": 2048, 00:13:18.339 "data_size": 63488 00:13:18.339 }, 00:13:18.339 { 00:13:18.339 "name": "BaseBdev4", 00:13:18.339 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:18.339 "is_configured": true, 00:13:18.339 "data_offset": 2048, 00:13:18.339 "data_size": 63488 00:13:18.339 } 00:13:18.339 ] 00:13:18.339 }' 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.339 23:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.598 
23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.598 "name": "raid_bdev1", 00:13:18.598 "uuid": "07cbec61-df67-431f-b266-7803f82df393", 00:13:18.598 "strip_size_kb": 0, 00:13:18.598 "state": "online", 00:13:18.598 "raid_level": "raid1", 00:13:18.598 "superblock": true, 00:13:18.598 "num_base_bdevs": 4, 00:13:18.598 "num_base_bdevs_discovered": 2, 00:13:18.598 "num_base_bdevs_operational": 2, 00:13:18.598 "base_bdevs_list": [ 00:13:18.598 { 00:13:18.598 "name": null, 00:13:18.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.598 "is_configured": false, 00:13:18.598 "data_offset": 0, 00:13:18.598 "data_size": 63488 00:13:18.598 }, 00:13:18.598 { 00:13:18.598 "name": null, 00:13:18.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.598 "is_configured": false, 00:13:18.598 "data_offset": 2048, 00:13:18.598 "data_size": 63488 00:13:18.598 }, 00:13:18.598 { 00:13:18.598 "name": "BaseBdev3", 00:13:18.598 "uuid": "1e3f3d3b-50cc-5c10-8ba7-52a562d98008", 00:13:18.598 "is_configured": true, 00:13:18.598 "data_offset": 2048, 00:13:18.598 "data_size": 63488 00:13:18.598 }, 00:13:18.598 { 00:13:18.598 "name": "BaseBdev4", 00:13:18.598 "uuid": "761a9e60-c6ff-537a-86aa-2137cdde5588", 00:13:18.598 "is_configured": true, 00:13:18.598 "data_offset": 2048, 00:13:18.598 "data_size": 63488 00:13:18.598 } 00:13:18.598 ] 00:13:18.598 }' 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89791 00:13:18.598 23:30:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89791 ']' 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89791 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89791 00:13:18.598 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:18.599 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:18.599 killing process with pid 89791 00:13:18.599 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89791' 00:13:18.599 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89791 00:13:18.599 Received shutdown signal, test time was about 17.698855 seconds 00:13:18.599 00:13:18.599 Latency(us) 00:13:18.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.599 =================================================================================================================== 00:13:18.599 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.599 [2024-09-30 23:30:58.409882] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.599 [2024-09-30 23:30:58.410030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.599 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89791 00:13:18.599 [2024-09-30 23:30:58.410127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.599 [2024-09-30 23:30:58.410145] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:18.858 [2024-09-30 23:30:58.457136] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.858 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:18.858 00:13:18.858 real 0m19.690s 00:13:18.858 user 0m25.965s 00:13:18.858 sys 0m2.731s 00:13:18.858 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.858 23:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.858 ************************************ 00:13:18.858 END TEST raid_rebuild_test_sb_io 00:13:18.858 ************************************ 00:13:19.118 23:30:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:19.118 23:30:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:19.118 23:30:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:19.118 23:30:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.118 23:30:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.118 ************************************ 00:13:19.118 START TEST raid5f_state_function_test 00:13:19.118 ************************************ 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90504 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:19.118 Process raid pid: 90504 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90504' 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90504 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90504 ']' 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.118 23:30:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.118 [2024-09-30 23:30:58.876575] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:13:19.118 [2024-09-30 23:30:58.876714] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.376 [2024-09-30 23:30:59.040043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.376 [2024-09-30 23:30:59.087870] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.376 [2024-09-30 23:30:59.131679] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.376 [2024-09-30 23:30:59.131729] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.945 [2024-09-30 23:30:59.713872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.945 [2024-09-30 23:30:59.713929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.945 [2024-09-30 23:30:59.713946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.945 [2024-09-30 23:30:59.713959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.945 [2024-09-30 23:30:59.713968] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:19.945 [2024-09-30 23:30:59.713982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.945 "name": "Existed_Raid", 00:13:19.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.945 "strip_size_kb": 64, 00:13:19.945 "state": "configuring", 00:13:19.945 "raid_level": "raid5f", 00:13:19.945 "superblock": false, 00:13:19.945 "num_base_bdevs": 3, 00:13:19.945 "num_base_bdevs_discovered": 0, 00:13:19.945 "num_base_bdevs_operational": 3, 00:13:19.945 "base_bdevs_list": [ 00:13:19.945 { 00:13:19.945 "name": "BaseBdev1", 00:13:19.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.945 "is_configured": false, 00:13:19.945 "data_offset": 0, 00:13:19.945 "data_size": 0 00:13:19.945 }, 00:13:19.945 { 00:13:19.945 "name": "BaseBdev2", 00:13:19.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.945 "is_configured": false, 00:13:19.945 "data_offset": 0, 00:13:19.945 "data_size": 0 00:13:19.945 }, 00:13:19.945 { 00:13:19.945 "name": "BaseBdev3", 00:13:19.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.945 "is_configured": false, 00:13:19.945 "data_offset": 0, 00:13:19.945 "data_size": 0 00:13:19.945 } 00:13:19.945 ] 00:13:19.945 }' 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.945 23:30:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.514 [2024-09-30 23:31:00.164931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.514 [2024-09-30 23:31:00.164981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.514 [2024-09-30 23:31:00.176949] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:20.514 [2024-09-30 23:31:00.176993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:20.514 [2024-09-30 23:31:00.177003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:20.514 [2024-09-30 23:31:00.177014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:20.514 [2024-09-30 23:31:00.177022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:20.514 [2024-09-30 23:31:00.177033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.514 [2024-09-30 23:31:00.198110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.514 BaseBdev1 00:13:20.514 23:31:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.514 [ 00:13:20.514 { 00:13:20.514 "name": "BaseBdev1", 00:13:20.514 "aliases": [ 00:13:20.514 "d8626881-8b2b-4acd-a868-a8ae41eeba92" 00:13:20.514 ], 00:13:20.514 "product_name": "Malloc disk", 00:13:20.514 "block_size": 512, 00:13:20.514 "num_blocks": 65536, 00:13:20.514 "uuid": "d8626881-8b2b-4acd-a868-a8ae41eeba92", 00:13:20.514 "assigned_rate_limits": { 00:13:20.514 "rw_ios_per_sec": 0, 00:13:20.514 
"rw_mbytes_per_sec": 0, 00:13:20.514 "r_mbytes_per_sec": 0, 00:13:20.514 "w_mbytes_per_sec": 0 00:13:20.514 }, 00:13:20.514 "claimed": true, 00:13:20.514 "claim_type": "exclusive_write", 00:13:20.514 "zoned": false, 00:13:20.514 "supported_io_types": { 00:13:20.514 "read": true, 00:13:20.514 "write": true, 00:13:20.514 "unmap": true, 00:13:20.514 "flush": true, 00:13:20.514 "reset": true, 00:13:20.514 "nvme_admin": false, 00:13:20.514 "nvme_io": false, 00:13:20.514 "nvme_io_md": false, 00:13:20.514 "write_zeroes": true, 00:13:20.514 "zcopy": true, 00:13:20.514 "get_zone_info": false, 00:13:20.514 "zone_management": false, 00:13:20.514 "zone_append": false, 00:13:20.514 "compare": false, 00:13:20.514 "compare_and_write": false, 00:13:20.514 "abort": true, 00:13:20.514 "seek_hole": false, 00:13:20.514 "seek_data": false, 00:13:20.514 "copy": true, 00:13:20.514 "nvme_iov_md": false 00:13:20.514 }, 00:13:20.514 "memory_domains": [ 00:13:20.514 { 00:13:20.514 "dma_device_id": "system", 00:13:20.514 "dma_device_type": 1 00:13:20.514 }, 00:13:20.514 { 00:13:20.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.514 "dma_device_type": 2 00:13:20.514 } 00:13:20.514 ], 00:13:20.514 "driver_specific": {} 00:13:20.514 } 00:13:20.514 ] 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.514 23:31:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.514 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.514 "name": "Existed_Raid", 00:13:20.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.514 "strip_size_kb": 64, 00:13:20.514 "state": "configuring", 00:13:20.514 "raid_level": "raid5f", 00:13:20.514 "superblock": false, 00:13:20.514 "num_base_bdevs": 3, 00:13:20.514 "num_base_bdevs_discovered": 1, 00:13:20.514 "num_base_bdevs_operational": 3, 00:13:20.514 "base_bdevs_list": [ 00:13:20.514 { 00:13:20.514 "name": "BaseBdev1", 00:13:20.514 "uuid": "d8626881-8b2b-4acd-a868-a8ae41eeba92", 00:13:20.514 "is_configured": true, 00:13:20.514 "data_offset": 0, 00:13:20.514 "data_size": 65536 00:13:20.514 }, 00:13:20.514 { 00:13:20.514 "name": 
"BaseBdev2", 00:13:20.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.514 "is_configured": false, 00:13:20.514 "data_offset": 0, 00:13:20.514 "data_size": 0 00:13:20.514 }, 00:13:20.514 { 00:13:20.514 "name": "BaseBdev3", 00:13:20.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.514 "is_configured": false, 00:13:20.514 "data_offset": 0, 00:13:20.515 "data_size": 0 00:13:20.515 } 00:13:20.515 ] 00:13:20.515 }' 00:13:20.515 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.515 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.084 [2024-09-30 23:31:00.721245] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.084 [2024-09-30 23:31:00.721295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.084 [2024-09-30 23:31:00.729270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.084 [2024-09-30 23:31:00.731271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:21.084 [2024-09-30 23:31:00.731323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.084 [2024-09-30 23:31:00.731336] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.084 [2024-09-30 23:31:00.731349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.084 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.084 "name": "Existed_Raid", 00:13:21.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.084 "strip_size_kb": 64, 00:13:21.084 "state": "configuring", 00:13:21.084 "raid_level": "raid5f", 00:13:21.084 "superblock": false, 00:13:21.084 "num_base_bdevs": 3, 00:13:21.084 "num_base_bdevs_discovered": 1, 00:13:21.084 "num_base_bdevs_operational": 3, 00:13:21.084 "base_bdevs_list": [ 00:13:21.084 { 00:13:21.084 "name": "BaseBdev1", 00:13:21.084 "uuid": "d8626881-8b2b-4acd-a868-a8ae41eeba92", 00:13:21.084 "is_configured": true, 00:13:21.084 "data_offset": 0, 00:13:21.084 "data_size": 65536 00:13:21.084 }, 00:13:21.084 { 00:13:21.084 "name": "BaseBdev2", 00:13:21.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.084 "is_configured": false, 00:13:21.084 "data_offset": 0, 00:13:21.084 "data_size": 0 00:13:21.084 }, 00:13:21.084 { 00:13:21.084 "name": "BaseBdev3", 00:13:21.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.085 "is_configured": false, 00:13:21.085 "data_offset": 0, 00:13:21.085 "data_size": 0 00:13:21.085 } 00:13:21.085 ] 00:13:21.085 }' 00:13:21.085 23:31:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.085 23:31:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.654 [2024-09-30 23:31:01.225285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.654 BaseBdev2 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.654 [ 00:13:21.654 { 00:13:21.654 "name": "BaseBdev2", 00:13:21.654 "aliases": [ 00:13:21.654 "47150eeb-f500-4fc3-b50a-dd568fe1ab87" 00:13:21.654 ], 00:13:21.654 "product_name": "Malloc disk", 00:13:21.654 "block_size": 512, 00:13:21.654 "num_blocks": 65536, 00:13:21.654 "uuid": "47150eeb-f500-4fc3-b50a-dd568fe1ab87", 00:13:21.654 "assigned_rate_limits": { 00:13:21.654 "rw_ios_per_sec": 0, 00:13:21.654 "rw_mbytes_per_sec": 0, 00:13:21.654 "r_mbytes_per_sec": 0, 00:13:21.654 "w_mbytes_per_sec": 0 00:13:21.654 }, 00:13:21.654 "claimed": true, 00:13:21.654 "claim_type": "exclusive_write", 00:13:21.654 "zoned": false, 00:13:21.654 "supported_io_types": { 00:13:21.654 "read": true, 00:13:21.654 "write": true, 00:13:21.654 "unmap": true, 00:13:21.654 "flush": true, 00:13:21.654 "reset": true, 00:13:21.654 "nvme_admin": false, 00:13:21.654 "nvme_io": false, 00:13:21.654 "nvme_io_md": false, 00:13:21.654 "write_zeroes": true, 00:13:21.654 "zcopy": true, 00:13:21.654 "get_zone_info": false, 00:13:21.654 "zone_management": false, 00:13:21.654 "zone_append": false, 00:13:21.654 "compare": false, 00:13:21.654 "compare_and_write": false, 00:13:21.654 "abort": true, 00:13:21.654 "seek_hole": false, 00:13:21.654 "seek_data": false, 00:13:21.654 "copy": true, 00:13:21.654 "nvme_iov_md": false 00:13:21.654 }, 00:13:21.654 "memory_domains": [ 00:13:21.654 { 00:13:21.654 "dma_device_id": "system", 00:13:21.654 "dma_device_type": 1 00:13:21.654 }, 00:13:21.654 { 00:13:21.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.654 "dma_device_type": 2 00:13:21.654 } 00:13:21.654 ], 00:13:21.654 "driver_specific": {} 00:13:21.654 } 00:13:21.654 ] 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.654 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:21.654 "name": "Existed_Raid", 00:13:21.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.654 "strip_size_kb": 64, 00:13:21.654 "state": "configuring", 00:13:21.654 "raid_level": "raid5f", 00:13:21.654 "superblock": false, 00:13:21.654 "num_base_bdevs": 3, 00:13:21.655 "num_base_bdevs_discovered": 2, 00:13:21.655 "num_base_bdevs_operational": 3, 00:13:21.655 "base_bdevs_list": [ 00:13:21.655 { 00:13:21.655 "name": "BaseBdev1", 00:13:21.655 "uuid": "d8626881-8b2b-4acd-a868-a8ae41eeba92", 00:13:21.655 "is_configured": true, 00:13:21.655 "data_offset": 0, 00:13:21.655 "data_size": 65536 00:13:21.655 }, 00:13:21.655 { 00:13:21.655 "name": "BaseBdev2", 00:13:21.655 "uuid": "47150eeb-f500-4fc3-b50a-dd568fe1ab87", 00:13:21.655 "is_configured": true, 00:13:21.655 "data_offset": 0, 00:13:21.655 "data_size": 65536 00:13:21.655 }, 00:13:21.655 { 00:13:21.655 "name": "BaseBdev3", 00:13:21.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.655 "is_configured": false, 00:13:21.655 "data_offset": 0, 00:13:21.655 "data_size": 0 00:13:21.655 } 00:13:21.655 ] 00:13:21.655 }' 00:13:21.655 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.655 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.915 [2024-09-30 23:31:01.687810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.915 [2024-09-30 23:31:01.687914] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:21.915 [2024-09-30 23:31:01.687930] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:21.915 [2024-09-30 23:31:01.688274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:21.915 [2024-09-30 23:31:01.688731] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:21.915 [2024-09-30 23:31:01.688754] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:21.915 [2024-09-30 23:31:01.689010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.915 BaseBdev3 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.915 [ 00:13:21.915 { 00:13:21.915 "name": "BaseBdev3", 00:13:21.915 "aliases": [ 00:13:21.915 "30ce9e95-33a6-43cc-92c0-83589c2363c9" 00:13:21.915 ], 00:13:21.915 "product_name": "Malloc disk", 00:13:21.915 "block_size": 512, 00:13:21.915 "num_blocks": 65536, 00:13:21.915 "uuid": "30ce9e95-33a6-43cc-92c0-83589c2363c9", 00:13:21.915 "assigned_rate_limits": { 00:13:21.915 "rw_ios_per_sec": 0, 00:13:21.915 "rw_mbytes_per_sec": 0, 00:13:21.915 "r_mbytes_per_sec": 0, 00:13:21.915 "w_mbytes_per_sec": 0 00:13:21.915 }, 00:13:21.915 "claimed": true, 00:13:21.915 "claim_type": "exclusive_write", 00:13:21.915 "zoned": false, 00:13:21.915 "supported_io_types": { 00:13:21.915 "read": true, 00:13:21.915 "write": true, 00:13:21.915 "unmap": true, 00:13:21.915 "flush": true, 00:13:21.915 "reset": true, 00:13:21.915 "nvme_admin": false, 00:13:21.915 "nvme_io": false, 00:13:21.915 "nvme_io_md": false, 00:13:21.915 "write_zeroes": true, 00:13:21.915 "zcopy": true, 00:13:21.915 "get_zone_info": false, 00:13:21.915 "zone_management": false, 00:13:21.915 "zone_append": false, 00:13:21.915 "compare": false, 00:13:21.915 "compare_and_write": false, 00:13:21.915 "abort": true, 00:13:21.915 "seek_hole": false, 00:13:21.915 "seek_data": false, 00:13:21.915 "copy": true, 00:13:21.915 "nvme_iov_md": false 00:13:21.915 }, 00:13:21.915 "memory_domains": [ 00:13:21.915 { 00:13:21.915 "dma_device_id": "system", 00:13:21.915 "dma_device_type": 1 00:13:21.915 }, 00:13:21.915 { 00:13:21.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.915 "dma_device_type": 2 00:13:21.915 } 00:13:21.915 ], 00:13:21.915 "driver_specific": {} 00:13:21.915 } 00:13:21.915 ] 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.915 23:31:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.915 "name": "Existed_Raid", 00:13:21.915 "uuid": "02f6b657-36a4-4718-b0d6-988126c39437", 00:13:21.915 "strip_size_kb": 64, 00:13:21.915 "state": "online", 00:13:21.915 "raid_level": "raid5f", 00:13:21.915 "superblock": false, 00:13:21.915 "num_base_bdevs": 3, 00:13:21.915 "num_base_bdevs_discovered": 3, 00:13:21.915 "num_base_bdevs_operational": 3, 00:13:21.915 "base_bdevs_list": [ 00:13:21.915 { 00:13:21.915 "name": "BaseBdev1", 00:13:21.915 "uuid": "d8626881-8b2b-4acd-a868-a8ae41eeba92", 00:13:21.915 "is_configured": true, 00:13:21.915 "data_offset": 0, 00:13:21.915 "data_size": 65536 00:13:21.915 }, 00:13:21.915 { 00:13:21.915 "name": "BaseBdev2", 00:13:21.915 "uuid": "47150eeb-f500-4fc3-b50a-dd568fe1ab87", 00:13:21.915 "is_configured": true, 00:13:21.915 "data_offset": 0, 00:13:21.915 "data_size": 65536 00:13:21.915 }, 00:13:21.915 { 00:13:21.915 "name": "BaseBdev3", 00:13:21.915 "uuid": "30ce9e95-33a6-43cc-92c0-83589c2363c9", 00:13:21.915 "is_configured": true, 00:13:21.915 "data_offset": 0, 00:13:21.915 "data_size": 65536 00:13:21.915 } 00:13:21.915 ] 00:13:21.915 }' 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.915 23:31:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:22.512 23:31:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.512 [2024-09-30 23:31:02.155255] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.512 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:22.512 "name": "Existed_Raid", 00:13:22.512 "aliases": [ 00:13:22.512 "02f6b657-36a4-4718-b0d6-988126c39437" 00:13:22.512 ], 00:13:22.512 "product_name": "Raid Volume", 00:13:22.512 "block_size": 512, 00:13:22.512 "num_blocks": 131072, 00:13:22.512 "uuid": "02f6b657-36a4-4718-b0d6-988126c39437", 00:13:22.512 "assigned_rate_limits": { 00:13:22.512 "rw_ios_per_sec": 0, 00:13:22.512 "rw_mbytes_per_sec": 0, 00:13:22.512 "r_mbytes_per_sec": 0, 00:13:22.512 "w_mbytes_per_sec": 0 00:13:22.512 }, 00:13:22.512 "claimed": false, 00:13:22.512 "zoned": false, 00:13:22.512 "supported_io_types": { 00:13:22.512 "read": true, 00:13:22.512 "write": true, 00:13:22.512 "unmap": false, 00:13:22.512 "flush": false, 00:13:22.512 "reset": true, 00:13:22.512 "nvme_admin": false, 00:13:22.512 "nvme_io": false, 00:13:22.512 "nvme_io_md": false, 00:13:22.512 "write_zeroes": true, 00:13:22.512 "zcopy": false, 00:13:22.512 "get_zone_info": false, 00:13:22.512 "zone_management": false, 00:13:22.512 "zone_append": false, 
00:13:22.512 "compare": false, 00:13:22.512 "compare_and_write": false, 00:13:22.512 "abort": false, 00:13:22.512 "seek_hole": false, 00:13:22.512 "seek_data": false, 00:13:22.512 "copy": false, 00:13:22.512 "nvme_iov_md": false 00:13:22.512 }, 00:13:22.512 "driver_specific": { 00:13:22.512 "raid": { 00:13:22.512 "uuid": "02f6b657-36a4-4718-b0d6-988126c39437", 00:13:22.512 "strip_size_kb": 64, 00:13:22.512 "state": "online", 00:13:22.512 "raid_level": "raid5f", 00:13:22.512 "superblock": false, 00:13:22.512 "num_base_bdevs": 3, 00:13:22.512 "num_base_bdevs_discovered": 3, 00:13:22.512 "num_base_bdevs_operational": 3, 00:13:22.512 "base_bdevs_list": [ 00:13:22.512 { 00:13:22.512 "name": "BaseBdev1", 00:13:22.512 "uuid": "d8626881-8b2b-4acd-a868-a8ae41eeba92", 00:13:22.512 "is_configured": true, 00:13:22.512 "data_offset": 0, 00:13:22.512 "data_size": 65536 00:13:22.512 }, 00:13:22.512 { 00:13:22.512 "name": "BaseBdev2", 00:13:22.513 "uuid": "47150eeb-f500-4fc3-b50a-dd568fe1ab87", 00:13:22.513 "is_configured": true, 00:13:22.513 "data_offset": 0, 00:13:22.513 "data_size": 65536 00:13:22.513 }, 00:13:22.513 { 00:13:22.513 "name": "BaseBdev3", 00:13:22.513 "uuid": "30ce9e95-33a6-43cc-92c0-83589c2363c9", 00:13:22.513 "is_configured": true, 00:13:22.513 "data_offset": 0, 00:13:22.513 "data_size": 65536 00:13:22.513 } 00:13:22.513 ] 00:13:22.513 } 00:13:22.513 } 00:13:22.513 }' 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:22.513 BaseBdev2 00:13:22.513 BaseBdev3' 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.513 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.772 [2024-09-30 23:31:02.422686] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:22.772 
23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.772 "name": "Existed_Raid", 00:13:22.772 "uuid": "02f6b657-36a4-4718-b0d6-988126c39437", 00:13:22.772 "strip_size_kb": 64, 00:13:22.772 "state": 
"online", 00:13:22.772 "raid_level": "raid5f", 00:13:22.772 "superblock": false, 00:13:22.772 "num_base_bdevs": 3, 00:13:22.772 "num_base_bdevs_discovered": 2, 00:13:22.772 "num_base_bdevs_operational": 2, 00:13:22.772 "base_bdevs_list": [ 00:13:22.772 { 00:13:22.772 "name": null, 00:13:22.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.772 "is_configured": false, 00:13:22.772 "data_offset": 0, 00:13:22.772 "data_size": 65536 00:13:22.772 }, 00:13:22.772 { 00:13:22.772 "name": "BaseBdev2", 00:13:22.772 "uuid": "47150eeb-f500-4fc3-b50a-dd568fe1ab87", 00:13:22.772 "is_configured": true, 00:13:22.772 "data_offset": 0, 00:13:22.772 "data_size": 65536 00:13:22.772 }, 00:13:22.772 { 00:13:22.772 "name": "BaseBdev3", 00:13:22.772 "uuid": "30ce9e95-33a6-43cc-92c0-83589c2363c9", 00:13:22.772 "is_configured": true, 00:13:22.772 "data_offset": 0, 00:13:22.772 "data_size": 65536 00:13:22.772 } 00:13:22.772 ] 00:13:22.772 }' 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.772 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.032 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:23.032 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:23.032 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:23.032 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.032 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.032 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 [2024-09-30 23:31:02.905306] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:23.293 [2024-09-30 23:31:02.905412] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.293 [2024-09-30 23:31:02.916868] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 [2024-09-30 23:31:02.972773] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:23.293 [2024-09-30 23:31:02.972831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:23.293 23:31:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 BaseBdev2 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:23.293 [ 00:13:23.293 { 00:13:23.293 "name": "BaseBdev2", 00:13:23.293 "aliases": [ 00:13:23.293 "4526b739-438a-4a11-b758-2884dede69ba" 00:13:23.293 ], 00:13:23.293 "product_name": "Malloc disk", 00:13:23.293 "block_size": 512, 00:13:23.293 "num_blocks": 65536, 00:13:23.293 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:23.293 "assigned_rate_limits": { 00:13:23.293 "rw_ios_per_sec": 0, 00:13:23.293 "rw_mbytes_per_sec": 0, 00:13:23.293 "r_mbytes_per_sec": 0, 00:13:23.293 "w_mbytes_per_sec": 0 00:13:23.293 }, 00:13:23.293 "claimed": false, 00:13:23.293 "zoned": false, 00:13:23.293 "supported_io_types": { 00:13:23.293 "read": true, 00:13:23.293 "write": true, 00:13:23.293 "unmap": true, 00:13:23.293 "flush": true, 00:13:23.293 "reset": true, 00:13:23.293 "nvme_admin": false, 00:13:23.293 "nvme_io": false, 00:13:23.293 "nvme_io_md": false, 00:13:23.293 "write_zeroes": true, 00:13:23.293 "zcopy": true, 00:13:23.293 "get_zone_info": false, 00:13:23.293 "zone_management": false, 00:13:23.293 "zone_append": false, 00:13:23.293 "compare": false, 00:13:23.293 "compare_and_write": false, 00:13:23.293 "abort": true, 00:13:23.293 "seek_hole": false, 00:13:23.293 "seek_data": false, 00:13:23.293 "copy": true, 00:13:23.293 "nvme_iov_md": false 00:13:23.293 }, 00:13:23.293 "memory_domains": [ 00:13:23.293 { 00:13:23.293 "dma_device_id": "system", 00:13:23.293 "dma_device_type": 1 00:13:23.293 }, 00:13:23.293 { 00:13:23.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.293 "dma_device_type": 2 00:13:23.293 } 00:13:23.293 ], 00:13:23.293 "driver_specific": {} 00:13:23.293 } 00:13:23.293 ] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 BaseBdev3 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.293 23:31:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.293 [ 00:13:23.293 { 00:13:23.293 "name": "BaseBdev3", 00:13:23.293 "aliases": [ 00:13:23.294 "72d9bbde-f73e-4cb0-b66d-03c22a55451b" 00:13:23.294 ], 00:13:23.294 "product_name": "Malloc disk", 00:13:23.294 "block_size": 512, 00:13:23.294 "num_blocks": 65536, 00:13:23.294 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:23.294 "assigned_rate_limits": { 00:13:23.294 "rw_ios_per_sec": 0, 00:13:23.294 "rw_mbytes_per_sec": 0, 00:13:23.294 "r_mbytes_per_sec": 0, 00:13:23.294 "w_mbytes_per_sec": 0 00:13:23.294 }, 00:13:23.294 "claimed": false, 00:13:23.294 "zoned": false, 00:13:23.294 "supported_io_types": { 00:13:23.294 "read": true, 00:13:23.294 "write": true, 00:13:23.294 "unmap": true, 00:13:23.294 "flush": true, 00:13:23.294 "reset": true, 00:13:23.294 "nvme_admin": false, 00:13:23.294 "nvme_io": false, 00:13:23.294 "nvme_io_md": false, 00:13:23.294 "write_zeroes": true, 00:13:23.294 "zcopy": true, 00:13:23.294 "get_zone_info": false, 00:13:23.294 "zone_management": false, 00:13:23.294 "zone_append": false, 00:13:23.294 "compare": false, 00:13:23.294 "compare_and_write": false, 00:13:23.294 "abort": true, 00:13:23.294 "seek_hole": false, 00:13:23.294 "seek_data": false, 00:13:23.294 "copy": true, 00:13:23.294 "nvme_iov_md": false 00:13:23.294 }, 00:13:23.294 "memory_domains": [ 00:13:23.294 { 00:13:23.294 "dma_device_id": "system", 00:13:23.294 "dma_device_type": 1 00:13:23.294 }, 00:13:23.294 { 00:13:23.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.294 "dma_device_type": 2 00:13:23.294 } 00:13:23.294 ], 00:13:23.294 "driver_specific": {} 00:13:23.294 } 00:13:23.294 ] 00:13:23.294 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.294 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:23.294 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:23.294 23:31:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:23.294 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:23.294 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.294 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.294 [2024-09-30 23:31:03.144112] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:23.294 [2024-09-30 23:31:03.144163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:23.294 [2024-09-30 23:31:03.144202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.553 [2024-09-30 23:31:03.146082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.553 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.554 23:31:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.554 "name": "Existed_Raid", 00:13:23.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.554 "strip_size_kb": 64, 00:13:23.554 "state": "configuring", 00:13:23.554 "raid_level": "raid5f", 00:13:23.554 "superblock": false, 00:13:23.554 "num_base_bdevs": 3, 00:13:23.554 "num_base_bdevs_discovered": 2, 00:13:23.554 "num_base_bdevs_operational": 3, 00:13:23.554 "base_bdevs_list": [ 00:13:23.554 { 00:13:23.554 "name": "BaseBdev1", 00:13:23.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.554 "is_configured": false, 00:13:23.554 "data_offset": 0, 00:13:23.554 "data_size": 0 00:13:23.554 }, 00:13:23.554 { 00:13:23.554 "name": "BaseBdev2", 00:13:23.554 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:23.554 "is_configured": true, 00:13:23.554 "data_offset": 0, 00:13:23.554 "data_size": 65536 00:13:23.554 }, 00:13:23.554 { 00:13:23.554 "name": "BaseBdev3", 00:13:23.554 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:23.554 "is_configured": true, 
00:13:23.554 "data_offset": 0, 00:13:23.554 "data_size": 65536 00:13:23.554 } 00:13:23.554 ] 00:13:23.554 }' 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.554 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.814 [2024-09-30 23:31:03.559397] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.814 23:31:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.814 "name": "Existed_Raid", 00:13:23.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.814 "strip_size_kb": 64, 00:13:23.814 "state": "configuring", 00:13:23.814 "raid_level": "raid5f", 00:13:23.814 "superblock": false, 00:13:23.814 "num_base_bdevs": 3, 00:13:23.814 "num_base_bdevs_discovered": 1, 00:13:23.814 "num_base_bdevs_operational": 3, 00:13:23.814 "base_bdevs_list": [ 00:13:23.814 { 00:13:23.814 "name": "BaseBdev1", 00:13:23.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.814 "is_configured": false, 00:13:23.814 "data_offset": 0, 00:13:23.814 "data_size": 0 00:13:23.814 }, 00:13:23.814 { 00:13:23.814 "name": null, 00:13:23.814 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:23.814 "is_configured": false, 00:13:23.814 "data_offset": 0, 00:13:23.814 "data_size": 65536 00:13:23.814 }, 00:13:23.814 { 00:13:23.814 "name": "BaseBdev3", 00:13:23.814 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:23.814 "is_configured": true, 00:13:23.814 "data_offset": 0, 00:13:23.814 "data_size": 65536 00:13:23.814 } 00:13:23.814 ] 00:13:23.814 }' 00:13:23.814 23:31:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.814 23:31:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.384 [2024-09-30 23:31:04.081843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:24.384 BaseBdev1 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:24.384 23:31:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.384 [ 00:13:24.384 { 00:13:24.384 "name": "BaseBdev1", 00:13:24.384 "aliases": [ 00:13:24.384 "a6836fe1-f151-4410-a631-6da320d95455" 00:13:24.384 ], 00:13:24.384 "product_name": "Malloc disk", 00:13:24.384 "block_size": 512, 00:13:24.384 "num_blocks": 65536, 00:13:24.384 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:24.384 "assigned_rate_limits": { 00:13:24.384 "rw_ios_per_sec": 0, 00:13:24.384 "rw_mbytes_per_sec": 0, 00:13:24.384 "r_mbytes_per_sec": 0, 00:13:24.384 "w_mbytes_per_sec": 0 00:13:24.384 }, 00:13:24.384 "claimed": true, 00:13:24.384 "claim_type": "exclusive_write", 00:13:24.384 "zoned": false, 00:13:24.384 "supported_io_types": { 00:13:24.384 "read": true, 00:13:24.384 "write": true, 00:13:24.384 "unmap": true, 00:13:24.384 "flush": true, 00:13:24.384 "reset": true, 00:13:24.384 "nvme_admin": false, 00:13:24.384 "nvme_io": false, 00:13:24.384 "nvme_io_md": false, 00:13:24.384 "write_zeroes": true, 00:13:24.384 "zcopy": true, 00:13:24.384 "get_zone_info": false, 00:13:24.384 "zone_management": false, 00:13:24.384 "zone_append": false, 00:13:24.384 
"compare": false, 00:13:24.384 "compare_and_write": false, 00:13:24.384 "abort": true, 00:13:24.384 "seek_hole": false, 00:13:24.384 "seek_data": false, 00:13:24.384 "copy": true, 00:13:24.384 "nvme_iov_md": false 00:13:24.384 }, 00:13:24.384 "memory_domains": [ 00:13:24.384 { 00:13:24.384 "dma_device_id": "system", 00:13:24.384 "dma_device_type": 1 00:13:24.384 }, 00:13:24.384 { 00:13:24.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.384 "dma_device_type": 2 00:13:24.384 } 00:13:24.384 ], 00:13:24.384 "driver_specific": {} 00:13:24.384 } 00:13:24.384 ] 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.384 23:31:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.384 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.384 "name": "Existed_Raid", 00:13:24.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.384 "strip_size_kb": 64, 00:13:24.384 "state": "configuring", 00:13:24.385 "raid_level": "raid5f", 00:13:24.385 "superblock": false, 00:13:24.385 "num_base_bdevs": 3, 00:13:24.385 "num_base_bdevs_discovered": 2, 00:13:24.385 "num_base_bdevs_operational": 3, 00:13:24.385 "base_bdevs_list": [ 00:13:24.385 { 00:13:24.385 "name": "BaseBdev1", 00:13:24.385 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:24.385 "is_configured": true, 00:13:24.385 "data_offset": 0, 00:13:24.385 "data_size": 65536 00:13:24.385 }, 00:13:24.385 { 00:13:24.385 "name": null, 00:13:24.385 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:24.385 "is_configured": false, 00:13:24.385 "data_offset": 0, 00:13:24.385 "data_size": 65536 00:13:24.385 }, 00:13:24.385 { 00:13:24.385 "name": "BaseBdev3", 00:13:24.385 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:24.385 "is_configured": true, 00:13:24.385 "data_offset": 0, 00:13:24.385 "data_size": 65536 00:13:24.385 } 00:13:24.385 ] 00:13:24.385 }' 00:13:24.385 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.385 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.954 23:31:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.954 [2024-09-30 23:31:04.581020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.954 23:31:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.954 "name": "Existed_Raid", 00:13:24.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.954 "strip_size_kb": 64, 00:13:24.954 "state": "configuring", 00:13:24.954 "raid_level": "raid5f", 00:13:24.954 "superblock": false, 00:13:24.954 "num_base_bdevs": 3, 00:13:24.954 "num_base_bdevs_discovered": 1, 00:13:24.954 "num_base_bdevs_operational": 3, 00:13:24.954 "base_bdevs_list": [ 00:13:24.954 { 00:13:24.954 "name": "BaseBdev1", 00:13:24.954 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:24.954 "is_configured": true, 00:13:24.954 "data_offset": 0, 00:13:24.954 "data_size": 65536 00:13:24.954 }, 00:13:24.954 { 00:13:24.954 "name": null, 00:13:24.954 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:24.954 "is_configured": false, 00:13:24.954 "data_offset": 0, 00:13:24.954 "data_size": 65536 00:13:24.954 }, 00:13:24.954 { 00:13:24.954 "name": null, 
00:13:24.954 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:24.954 "is_configured": false, 00:13:24.954 "data_offset": 0, 00:13:24.954 "data_size": 65536 00:13:24.954 } 00:13:24.954 ] 00:13:24.954 }' 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.954 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.213 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.213 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.213 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.213 23:31:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:25.213 23:31:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.213 [2024-09-30 23:31:05.016314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.213 23:31:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.213 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.213 "name": "Existed_Raid", 00:13:25.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.213 "strip_size_kb": 64, 00:13:25.213 "state": "configuring", 00:13:25.213 "raid_level": "raid5f", 00:13:25.213 "superblock": false, 00:13:25.213 "num_base_bdevs": 3, 00:13:25.213 "num_base_bdevs_discovered": 2, 00:13:25.213 "num_base_bdevs_operational": 3, 00:13:25.213 "base_bdevs_list": [ 00:13:25.213 { 
00:13:25.213 "name": "BaseBdev1", 00:13:25.213 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:25.213 "is_configured": true, 00:13:25.213 "data_offset": 0, 00:13:25.213 "data_size": 65536 00:13:25.213 }, 00:13:25.213 { 00:13:25.213 "name": null, 00:13:25.214 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:25.214 "is_configured": false, 00:13:25.214 "data_offset": 0, 00:13:25.214 "data_size": 65536 00:13:25.214 }, 00:13:25.214 { 00:13:25.214 "name": "BaseBdev3", 00:13:25.214 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:25.214 "is_configured": true, 00:13:25.214 "data_offset": 0, 00:13:25.214 "data_size": 65536 00:13:25.214 } 00:13:25.214 ] 00:13:25.214 }' 00:13:25.214 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.214 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.783 [2024-09-30 23:31:05.475534] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.783 "name": "Existed_Raid", 00:13:25.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.783 "strip_size_kb": 64, 00:13:25.783 "state": "configuring", 00:13:25.783 "raid_level": "raid5f", 00:13:25.783 "superblock": false, 00:13:25.783 "num_base_bdevs": 3, 00:13:25.783 "num_base_bdevs_discovered": 1, 00:13:25.783 "num_base_bdevs_operational": 3, 00:13:25.783 "base_bdevs_list": [ 00:13:25.783 { 00:13:25.783 "name": null, 00:13:25.783 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:25.783 "is_configured": false, 00:13:25.783 "data_offset": 0, 00:13:25.783 "data_size": 65536 00:13:25.783 }, 00:13:25.783 { 00:13:25.783 "name": null, 00:13:25.783 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:25.783 "is_configured": false, 00:13:25.783 "data_offset": 0, 00:13:25.783 "data_size": 65536 00:13:25.783 }, 00:13:25.783 { 00:13:25.783 "name": "BaseBdev3", 00:13:25.783 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:25.783 "is_configured": true, 00:13:25.783 "data_offset": 0, 00:13:25.783 "data_size": 65536 00:13:25.783 } 00:13:25.783 ] 00:13:25.783 }' 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.783 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.353 [2024-09-30 23:31:05.973408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.353 23:31:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.353 23:31:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.353 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.353 "name": "Existed_Raid", 00:13:26.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.353 "strip_size_kb": 64, 00:13:26.353 "state": "configuring", 00:13:26.353 "raid_level": "raid5f", 00:13:26.353 "superblock": false, 00:13:26.353 "num_base_bdevs": 3, 00:13:26.353 "num_base_bdevs_discovered": 2, 00:13:26.353 "num_base_bdevs_operational": 3, 00:13:26.353 "base_bdevs_list": [ 00:13:26.353 { 00:13:26.353 "name": null, 00:13:26.353 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:26.353 "is_configured": false, 00:13:26.353 "data_offset": 0, 00:13:26.353 "data_size": 65536 00:13:26.353 }, 00:13:26.353 { 00:13:26.353 "name": "BaseBdev2", 00:13:26.353 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:26.353 "is_configured": true, 00:13:26.353 "data_offset": 0, 00:13:26.353 "data_size": 65536 00:13:26.353 }, 00:13:26.353 { 00:13:26.353 "name": "BaseBdev3", 00:13:26.353 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:26.353 "is_configured": true, 00:13:26.353 "data_offset": 0, 00:13:26.353 "data_size": 65536 00:13:26.353 } 00:13:26.353 ] 00:13:26.353 }' 00:13:26.353 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.353 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.613 23:31:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.613 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a6836fe1-f151-4410-a631-6da320d95455 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.873 [2024-09-30 23:31:06.515641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:26.873 [2024-09-30 23:31:06.515694] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:26.873 [2024-09-30 23:31:06.515705] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:26.873 [2024-09-30 23:31:06.516006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:13:26.873 [2024-09-30 23:31:06.516456] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:26.873 [2024-09-30 23:31:06.516479] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:13:26.873 [2024-09-30 23:31:06.516678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.873 NewBaseBdev 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.873 23:31:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.873 [ 00:13:26.873 { 00:13:26.873 "name": "NewBaseBdev", 00:13:26.873 "aliases": [ 00:13:26.873 "a6836fe1-f151-4410-a631-6da320d95455" 00:13:26.873 ], 00:13:26.873 "product_name": "Malloc disk", 00:13:26.873 "block_size": 512, 00:13:26.873 "num_blocks": 65536, 00:13:26.873 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:26.873 "assigned_rate_limits": { 00:13:26.873 "rw_ios_per_sec": 0, 00:13:26.873 "rw_mbytes_per_sec": 0, 00:13:26.873 "r_mbytes_per_sec": 0, 00:13:26.873 "w_mbytes_per_sec": 0 00:13:26.873 }, 00:13:26.873 "claimed": true, 00:13:26.873 "claim_type": "exclusive_write", 00:13:26.873 "zoned": false, 00:13:26.873 "supported_io_types": { 00:13:26.873 "read": true, 00:13:26.873 "write": true, 00:13:26.873 "unmap": true, 00:13:26.873 "flush": true, 00:13:26.873 "reset": true, 00:13:26.873 "nvme_admin": false, 00:13:26.873 "nvme_io": false, 00:13:26.873 "nvme_io_md": false, 00:13:26.873 "write_zeroes": true, 00:13:26.873 "zcopy": true, 00:13:26.873 "get_zone_info": false, 00:13:26.873 "zone_management": false, 00:13:26.873 "zone_append": false, 00:13:26.873 "compare": false, 00:13:26.873 "compare_and_write": false, 00:13:26.873 "abort": true, 00:13:26.873 "seek_hole": false, 00:13:26.873 "seek_data": false, 00:13:26.873 "copy": true, 00:13:26.873 "nvme_iov_md": false 00:13:26.873 }, 00:13:26.873 "memory_domains": [ 00:13:26.873 { 00:13:26.873 "dma_device_id": "system", 00:13:26.873 "dma_device_type": 1 00:13:26.873 }, 00:13:26.873 { 00:13:26.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.873 "dma_device_type": 2 00:13:26.873 } 00:13:26.873 ], 00:13:26.873 "driver_specific": {} 00:13:26.873 } 00:13:26.873 ] 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:26.873 23:31:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:26.873 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.874 "name": "Existed_Raid", 00:13:26.874 "uuid": "d66cd8c2-2c82-4593-96b5-d29b0980a26a", 00:13:26.874 "strip_size_kb": 64, 00:13:26.874 "state": "online", 
00:13:26.874 "raid_level": "raid5f", 00:13:26.874 "superblock": false, 00:13:26.874 "num_base_bdevs": 3, 00:13:26.874 "num_base_bdevs_discovered": 3, 00:13:26.874 "num_base_bdevs_operational": 3, 00:13:26.874 "base_bdevs_list": [ 00:13:26.874 { 00:13:26.874 "name": "NewBaseBdev", 00:13:26.874 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:26.874 "is_configured": true, 00:13:26.874 "data_offset": 0, 00:13:26.874 "data_size": 65536 00:13:26.874 }, 00:13:26.874 { 00:13:26.874 "name": "BaseBdev2", 00:13:26.874 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:26.874 "is_configured": true, 00:13:26.874 "data_offset": 0, 00:13:26.874 "data_size": 65536 00:13:26.874 }, 00:13:26.874 { 00:13:26.874 "name": "BaseBdev3", 00:13:26.874 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:26.874 "is_configured": true, 00:13:26.874 "data_offset": 0, 00:13:26.874 "data_size": 65536 00:13:26.874 } 00:13:26.874 ] 00:13:26.874 }' 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.874 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:27.134 23:31:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.134 [2024-09-30 23:31:06.951376] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.134 23:31:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.394 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:27.394 "name": "Existed_Raid", 00:13:27.394 "aliases": [ 00:13:27.394 "d66cd8c2-2c82-4593-96b5-d29b0980a26a" 00:13:27.394 ], 00:13:27.394 "product_name": "Raid Volume", 00:13:27.394 "block_size": 512, 00:13:27.394 "num_blocks": 131072, 00:13:27.394 "uuid": "d66cd8c2-2c82-4593-96b5-d29b0980a26a", 00:13:27.394 "assigned_rate_limits": { 00:13:27.394 "rw_ios_per_sec": 0, 00:13:27.394 "rw_mbytes_per_sec": 0, 00:13:27.394 "r_mbytes_per_sec": 0, 00:13:27.394 "w_mbytes_per_sec": 0 00:13:27.394 }, 00:13:27.394 "claimed": false, 00:13:27.394 "zoned": false, 00:13:27.394 "supported_io_types": { 00:13:27.394 "read": true, 00:13:27.394 "write": true, 00:13:27.394 "unmap": false, 00:13:27.394 "flush": false, 00:13:27.394 "reset": true, 00:13:27.394 "nvme_admin": false, 00:13:27.394 "nvme_io": false, 00:13:27.394 "nvme_io_md": false, 00:13:27.394 "write_zeroes": true, 00:13:27.394 "zcopy": false, 00:13:27.394 "get_zone_info": false, 00:13:27.394 "zone_management": false, 00:13:27.394 "zone_append": false, 00:13:27.394 "compare": false, 00:13:27.394 "compare_and_write": false, 00:13:27.394 "abort": false, 00:13:27.394 "seek_hole": false, 00:13:27.394 "seek_data": false, 00:13:27.394 "copy": false, 00:13:27.394 "nvme_iov_md": false 00:13:27.394 }, 00:13:27.394 "driver_specific": { 00:13:27.394 "raid": { 00:13:27.394 "uuid": 
"d66cd8c2-2c82-4593-96b5-d29b0980a26a", 00:13:27.394 "strip_size_kb": 64, 00:13:27.394 "state": "online", 00:13:27.394 "raid_level": "raid5f", 00:13:27.394 "superblock": false, 00:13:27.394 "num_base_bdevs": 3, 00:13:27.394 "num_base_bdevs_discovered": 3, 00:13:27.394 "num_base_bdevs_operational": 3, 00:13:27.394 "base_bdevs_list": [ 00:13:27.394 { 00:13:27.394 "name": "NewBaseBdev", 00:13:27.394 "uuid": "a6836fe1-f151-4410-a631-6da320d95455", 00:13:27.394 "is_configured": true, 00:13:27.394 "data_offset": 0, 00:13:27.394 "data_size": 65536 00:13:27.394 }, 00:13:27.394 { 00:13:27.394 "name": "BaseBdev2", 00:13:27.394 "uuid": "4526b739-438a-4a11-b758-2884dede69ba", 00:13:27.394 "is_configured": true, 00:13:27.394 "data_offset": 0, 00:13:27.394 "data_size": 65536 00:13:27.394 }, 00:13:27.394 { 00:13:27.394 "name": "BaseBdev3", 00:13:27.394 "uuid": "72d9bbde-f73e-4cb0-b66d-03c22a55451b", 00:13:27.394 "is_configured": true, 00:13:27.394 "data_offset": 0, 00:13:27.394 "data_size": 65536 00:13:27.394 } 00:13:27.394 ] 00:13:27.394 } 00:13:27.394 } 00:13:27.394 }' 00:13:27.394 23:31:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:27.394 BaseBdev2 00:13:27.394 BaseBdev3' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.394 23:31:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.394 [2024-09-30 23:31:07.186810] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:27.394 [2024-09-30 23:31:07.186839] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.394 [2024-09-30 23:31:07.186915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.394 [2024-09-30 23:31:07.187169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.394 [2024-09-30 23:31:07.187194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90504 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90504 ']' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 90504 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90504 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.394 killing process with pid 90504 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90504' 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90504 00:13:27.394 [2024-09-30 23:31:07.230710] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.394 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90504 00:13:27.654 [2024-09-30 23:31:07.262550] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:27.914 00:13:27.914 real 0m8.736s 00:13:27.914 user 0m14.893s 00:13:27.914 sys 0m1.815s 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.914 ************************************ 00:13:27.914 END TEST raid5f_state_function_test 00:13:27.914 ************************************ 00:13:27.914 23:31:07 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:27.914 23:31:07 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:27.914 23:31:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.914 23:31:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.914 ************************************ 00:13:27.914 START TEST raid5f_state_function_test_sb 00:13:27.914 ************************************ 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:27.914 23:31:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91103 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:27.914 Process raid pid: 91103 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91103' 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 91103 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91103 ']' 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.914 23:31:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.915 23:31:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.915 23:31:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.915 [2024-09-30 23:31:07.695324] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:13:27.915 [2024-09-30 23:31:07.695459] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.174 [2024-09-30 23:31:07.856201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.175 [2024-09-30 23:31:07.902335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.175 [2024-09-30 23:31:07.946104] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.175 [2024-09-30 23:31:07.946157] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.744 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.744 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:28.744 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:28.744 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.744 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.744 [2024-09-30 23:31:08.524188] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.744 [2024-09-30 23:31:08.524241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.744 [2024-09-30 23:31:08.524257] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.744 [2024-09-30 23:31:08.524269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.744 [2024-09-30 23:31:08.524277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:28.744 [2024-09-30 23:31:08.524292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:28.744 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.744 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:28.744 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.745 23:31:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.745 "name": "Existed_Raid", 00:13:28.745 "uuid": "f3315f99-afba-45ff-9dc5-47856d3f5aed", 00:13:28.745 "strip_size_kb": 64, 00:13:28.745 "state": "configuring", 00:13:28.745 "raid_level": "raid5f", 00:13:28.745 "superblock": true, 00:13:28.745 "num_base_bdevs": 3, 00:13:28.745 "num_base_bdevs_discovered": 0, 00:13:28.745 "num_base_bdevs_operational": 3, 00:13:28.745 "base_bdevs_list": [ 00:13:28.745 { 00:13:28.745 "name": "BaseBdev1", 00:13:28.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.745 "is_configured": false, 00:13:28.745 "data_offset": 0, 00:13:28.745 "data_size": 0 00:13:28.745 }, 00:13:28.745 { 00:13:28.745 "name": "BaseBdev2", 00:13:28.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.745 "is_configured": false, 00:13:28.745 "data_offset": 0, 00:13:28.745 "data_size": 0 00:13:28.745 }, 00:13:28.745 { 00:13:28.745 "name": "BaseBdev3", 00:13:28.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.745 "is_configured": false, 00:13:28.745 "data_offset": 0, 00:13:28.745 "data_size": 0 00:13:28.745 } 00:13:28.745 ] 00:13:28.745 }' 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.745 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.314 [2024-09-30 23:31:08.895452] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.314 
[2024-09-30 23:31:08.895498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.314 [2024-09-30 23:31:08.907475] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.314 [2024-09-30 23:31:08.907517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.314 [2024-09-30 23:31:08.907527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.314 [2024-09-30 23:31:08.907539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.314 [2024-09-30 23:31:08.907546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:29.314 [2024-09-30 23:31:08.907557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.314 [2024-09-30 23:31:08.928612] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.314 BaseBdev1 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:29.314 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.315 [ 00:13:29.315 { 00:13:29.315 "name": "BaseBdev1", 00:13:29.315 "aliases": [ 00:13:29.315 "48048971-a58b-4d26-ae77-43fc5021ef7e" 00:13:29.315 ], 00:13:29.315 "product_name": "Malloc disk", 00:13:29.315 "block_size": 512, 00:13:29.315 
"num_blocks": 65536, 00:13:29.315 "uuid": "48048971-a58b-4d26-ae77-43fc5021ef7e", 00:13:29.315 "assigned_rate_limits": { 00:13:29.315 "rw_ios_per_sec": 0, 00:13:29.315 "rw_mbytes_per_sec": 0, 00:13:29.315 "r_mbytes_per_sec": 0, 00:13:29.315 "w_mbytes_per_sec": 0 00:13:29.315 }, 00:13:29.315 "claimed": true, 00:13:29.315 "claim_type": "exclusive_write", 00:13:29.315 "zoned": false, 00:13:29.315 "supported_io_types": { 00:13:29.315 "read": true, 00:13:29.315 "write": true, 00:13:29.315 "unmap": true, 00:13:29.315 "flush": true, 00:13:29.315 "reset": true, 00:13:29.315 "nvme_admin": false, 00:13:29.315 "nvme_io": false, 00:13:29.315 "nvme_io_md": false, 00:13:29.315 "write_zeroes": true, 00:13:29.315 "zcopy": true, 00:13:29.315 "get_zone_info": false, 00:13:29.315 "zone_management": false, 00:13:29.315 "zone_append": false, 00:13:29.315 "compare": false, 00:13:29.315 "compare_and_write": false, 00:13:29.315 "abort": true, 00:13:29.315 "seek_hole": false, 00:13:29.315 "seek_data": false, 00:13:29.315 "copy": true, 00:13:29.315 "nvme_iov_md": false 00:13:29.315 }, 00:13:29.315 "memory_domains": [ 00:13:29.315 { 00:13:29.315 "dma_device_id": "system", 00:13:29.315 "dma_device_type": 1 00:13:29.315 }, 00:13:29.315 { 00:13:29.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.315 "dma_device_type": 2 00:13:29.315 } 00:13:29.315 ], 00:13:29.315 "driver_specific": {} 00:13:29.315 } 00:13:29.315 ] 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.315 "name": "Existed_Raid", 00:13:29.315 "uuid": "707ce99a-f4db-44f3-b492-44209c5200a6", 00:13:29.315 "strip_size_kb": 64, 00:13:29.315 "state": "configuring", 00:13:29.315 "raid_level": "raid5f", 00:13:29.315 "superblock": true, 00:13:29.315 "num_base_bdevs": 3, 00:13:29.315 "num_base_bdevs_discovered": 1, 00:13:29.315 "num_base_bdevs_operational": 3, 00:13:29.315 "base_bdevs_list": [ 00:13:29.315 { 00:13:29.315 
"name": "BaseBdev1", 00:13:29.315 "uuid": "48048971-a58b-4d26-ae77-43fc5021ef7e", 00:13:29.315 "is_configured": true, 00:13:29.315 "data_offset": 2048, 00:13:29.315 "data_size": 63488 00:13:29.315 }, 00:13:29.315 { 00:13:29.315 "name": "BaseBdev2", 00:13:29.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.315 "is_configured": false, 00:13:29.315 "data_offset": 0, 00:13:29.315 "data_size": 0 00:13:29.315 }, 00:13:29.315 { 00:13:29.315 "name": "BaseBdev3", 00:13:29.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.315 "is_configured": false, 00:13:29.315 "data_offset": 0, 00:13:29.315 "data_size": 0 00:13:29.315 } 00:13:29.315 ] 00:13:29.315 }' 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.315 23:31:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.575 [2024-09-30 23:31:09.383873] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.575 [2024-09-30 23:31:09.383919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:29.575 [2024-09-30 23:31:09.391902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.575 [2024-09-30 23:31:09.393754] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.575 [2024-09-30 23:31:09.393798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.575 [2024-09-30 23:31:09.393810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:29.575 [2024-09-30 23:31:09.393823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.575 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.835 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.835 "name": "Existed_Raid", 00:13:29.835 "uuid": "3be7a004-cee0-42c5-9950-fa34259eb2d4", 00:13:29.835 "strip_size_kb": 64, 00:13:29.835 "state": "configuring", 00:13:29.835 "raid_level": "raid5f", 00:13:29.835 "superblock": true, 00:13:29.835 "num_base_bdevs": 3, 00:13:29.835 "num_base_bdevs_discovered": 1, 00:13:29.835 "num_base_bdevs_operational": 3, 00:13:29.835 "base_bdevs_list": [ 00:13:29.835 { 00:13:29.835 "name": "BaseBdev1", 00:13:29.835 "uuid": "48048971-a58b-4d26-ae77-43fc5021ef7e", 00:13:29.835 "is_configured": true, 00:13:29.835 "data_offset": 2048, 00:13:29.835 "data_size": 63488 00:13:29.835 }, 00:13:29.835 { 00:13:29.835 "name": "BaseBdev2", 00:13:29.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.835 "is_configured": false, 00:13:29.835 "data_offset": 0, 00:13:29.835 "data_size": 0 00:13:29.835 }, 00:13:29.835 { 00:13:29.835 "name": "BaseBdev3", 00:13:29.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.835 "is_configured": false, 00:13:29.835 "data_offset": 0, 00:13:29.835 "data_size": 
0 00:13:29.835 } 00:13:29.835 ] 00:13:29.835 }' 00:13:29.835 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.835 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.095 [2024-09-30 23:31:09.867955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.095 BaseBdev2 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.095 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.095 [ 00:13:30.095 { 00:13:30.095 "name": "BaseBdev2", 00:13:30.095 "aliases": [ 00:13:30.095 "84e988e7-434d-4494-a2dc-3d47eb1f382b" 00:13:30.095 ], 00:13:30.095 "product_name": "Malloc disk", 00:13:30.095 "block_size": 512, 00:13:30.095 "num_blocks": 65536, 00:13:30.095 "uuid": "84e988e7-434d-4494-a2dc-3d47eb1f382b", 00:13:30.095 "assigned_rate_limits": { 00:13:30.095 "rw_ios_per_sec": 0, 00:13:30.095 "rw_mbytes_per_sec": 0, 00:13:30.095 "r_mbytes_per_sec": 0, 00:13:30.095 "w_mbytes_per_sec": 0 00:13:30.096 }, 00:13:30.096 "claimed": true, 00:13:30.096 "claim_type": "exclusive_write", 00:13:30.096 "zoned": false, 00:13:30.096 "supported_io_types": { 00:13:30.096 "read": true, 00:13:30.096 "write": true, 00:13:30.096 "unmap": true, 00:13:30.096 "flush": true, 00:13:30.096 "reset": true, 00:13:30.096 "nvme_admin": false, 00:13:30.096 "nvme_io": false, 00:13:30.096 "nvme_io_md": false, 00:13:30.096 "write_zeroes": true, 00:13:30.096 "zcopy": true, 00:13:30.096 "get_zone_info": false, 00:13:30.096 "zone_management": false, 00:13:30.096 "zone_append": false, 00:13:30.096 "compare": false, 00:13:30.096 "compare_and_write": false, 00:13:30.096 "abort": true, 00:13:30.096 "seek_hole": false, 00:13:30.096 "seek_data": false, 00:13:30.096 "copy": true, 00:13:30.096 "nvme_iov_md": false 00:13:30.096 }, 00:13:30.096 "memory_domains": [ 00:13:30.096 { 00:13:30.096 "dma_device_id": "system", 00:13:30.096 "dma_device_type": 1 00:13:30.096 }, 00:13:30.096 { 00:13:30.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.096 "dma_device_type": 2 00:13:30.096 } 
00:13:30.096 ], 00:13:30.096 "driver_specific": {} 00:13:30.096 } 00:13:30.096 ] 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.096 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.356 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.356 "name": "Existed_Raid", 00:13:30.356 "uuid": "3be7a004-cee0-42c5-9950-fa34259eb2d4", 00:13:30.356 "strip_size_kb": 64, 00:13:30.356 "state": "configuring", 00:13:30.356 "raid_level": "raid5f", 00:13:30.356 "superblock": true, 00:13:30.356 "num_base_bdevs": 3, 00:13:30.356 "num_base_bdevs_discovered": 2, 00:13:30.356 "num_base_bdevs_operational": 3, 00:13:30.356 "base_bdevs_list": [ 00:13:30.356 { 00:13:30.356 "name": "BaseBdev1", 00:13:30.356 "uuid": "48048971-a58b-4d26-ae77-43fc5021ef7e", 00:13:30.356 "is_configured": true, 00:13:30.356 "data_offset": 2048, 00:13:30.356 "data_size": 63488 00:13:30.356 }, 00:13:30.356 { 00:13:30.356 "name": "BaseBdev2", 00:13:30.356 "uuid": "84e988e7-434d-4494-a2dc-3d47eb1f382b", 00:13:30.356 "is_configured": true, 00:13:30.356 "data_offset": 2048, 00:13:30.356 "data_size": 63488 00:13:30.356 }, 00:13:30.356 { 00:13:30.356 "name": "BaseBdev3", 00:13:30.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.356 "is_configured": false, 00:13:30.356 "data_offset": 0, 00:13:30.356 "data_size": 0 00:13:30.356 } 00:13:30.356 ] 00:13:30.356 }' 00:13:30.356 23:31:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.356 23:31:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.617 [2024-09-30 23:31:10.354510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.617 [2024-09-30 23:31:10.354729] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:30.617 [2024-09-30 23:31:10.354761] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:30.617 BaseBdev3 00:13:30.617 [2024-09-30 23:31:10.355097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:30.617 [2024-09-30 23:31:10.355553] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:30.617 [2024-09-30 23:31:10.355583] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.617 [2024-09-30 23:31:10.355717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.617 [ 00:13:30.617 { 00:13:30.617 "name": "BaseBdev3", 00:13:30.617 "aliases": [ 00:13:30.617 "2ac4984b-ea5a-4092-b228-91da65e53708" 00:13:30.617 ], 00:13:30.617 "product_name": "Malloc disk", 00:13:30.617 "block_size": 512, 00:13:30.617 "num_blocks": 65536, 00:13:30.617 "uuid": "2ac4984b-ea5a-4092-b228-91da65e53708", 00:13:30.617 "assigned_rate_limits": { 00:13:30.617 "rw_ios_per_sec": 0, 00:13:30.617 "rw_mbytes_per_sec": 0, 00:13:30.617 "r_mbytes_per_sec": 0, 00:13:30.617 "w_mbytes_per_sec": 0 00:13:30.617 }, 00:13:30.617 "claimed": true, 00:13:30.617 "claim_type": "exclusive_write", 00:13:30.617 "zoned": false, 00:13:30.617 "supported_io_types": { 00:13:30.617 "read": true, 00:13:30.617 "write": true, 00:13:30.617 "unmap": true, 00:13:30.617 "flush": true, 00:13:30.617 "reset": true, 00:13:30.617 "nvme_admin": false, 00:13:30.617 "nvme_io": false, 00:13:30.617 "nvme_io_md": false, 00:13:30.617 "write_zeroes": true, 00:13:30.617 "zcopy": true, 00:13:30.617 "get_zone_info": false, 00:13:30.617 "zone_management": false, 00:13:30.617 "zone_append": false, 00:13:30.617 "compare": false, 00:13:30.617 "compare_and_write": false, 00:13:30.617 "abort": true, 00:13:30.617 "seek_hole": false, 00:13:30.617 "seek_data": false, 00:13:30.617 "copy": true, 00:13:30.617 "nvme_iov_md": 
false 00:13:30.617 }, 00:13:30.617 "memory_domains": [ 00:13:30.617 { 00:13:30.617 "dma_device_id": "system", 00:13:30.617 "dma_device_type": 1 00:13:30.617 }, 00:13:30.617 { 00:13:30.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.617 "dma_device_type": 2 00:13:30.617 } 00:13:30.617 ], 00:13:30.617 "driver_specific": {} 00:13:30.617 } 00:13:30.617 ] 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.617 "name": "Existed_Raid", 00:13:30.617 "uuid": "3be7a004-cee0-42c5-9950-fa34259eb2d4", 00:13:30.617 "strip_size_kb": 64, 00:13:30.617 "state": "online", 00:13:30.617 "raid_level": "raid5f", 00:13:30.617 "superblock": true, 00:13:30.617 "num_base_bdevs": 3, 00:13:30.617 "num_base_bdevs_discovered": 3, 00:13:30.617 "num_base_bdevs_operational": 3, 00:13:30.617 "base_bdevs_list": [ 00:13:30.617 { 00:13:30.617 "name": "BaseBdev1", 00:13:30.617 "uuid": "48048971-a58b-4d26-ae77-43fc5021ef7e", 00:13:30.617 "is_configured": true, 00:13:30.617 "data_offset": 2048, 00:13:30.617 "data_size": 63488 00:13:30.617 }, 00:13:30.617 { 00:13:30.617 "name": "BaseBdev2", 00:13:30.617 "uuid": "84e988e7-434d-4494-a2dc-3d47eb1f382b", 00:13:30.617 "is_configured": true, 00:13:30.617 "data_offset": 2048, 00:13:30.617 "data_size": 63488 00:13:30.617 }, 00:13:30.617 { 00:13:30.617 "name": "BaseBdev3", 00:13:30.617 "uuid": "2ac4984b-ea5a-4092-b228-91da65e53708", 00:13:30.617 "is_configured": true, 00:13:30.617 "data_offset": 2048, 00:13:30.617 "data_size": 63488 00:13:30.617 } 00:13:30.617 ] 00:13:30.617 }' 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.617 23:31:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.187 [2024-09-30 23:31:10.857896] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.187 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:31.187 "name": "Existed_Raid", 00:13:31.187 "aliases": [ 00:13:31.187 "3be7a004-cee0-42c5-9950-fa34259eb2d4" 00:13:31.187 ], 00:13:31.187 "product_name": "Raid Volume", 00:13:31.187 "block_size": 512, 00:13:31.187 "num_blocks": 126976, 00:13:31.187 "uuid": "3be7a004-cee0-42c5-9950-fa34259eb2d4", 00:13:31.187 "assigned_rate_limits": { 00:13:31.187 "rw_ios_per_sec": 0, 00:13:31.187 "rw_mbytes_per_sec": 0, 00:13:31.187 "r_mbytes_per_sec": 
0, 00:13:31.187 "w_mbytes_per_sec": 0 00:13:31.187 }, 00:13:31.187 "claimed": false, 00:13:31.187 "zoned": false, 00:13:31.187 "supported_io_types": { 00:13:31.187 "read": true, 00:13:31.187 "write": true, 00:13:31.187 "unmap": false, 00:13:31.187 "flush": false, 00:13:31.187 "reset": true, 00:13:31.187 "nvme_admin": false, 00:13:31.187 "nvme_io": false, 00:13:31.187 "nvme_io_md": false, 00:13:31.187 "write_zeroes": true, 00:13:31.187 "zcopy": false, 00:13:31.187 "get_zone_info": false, 00:13:31.187 "zone_management": false, 00:13:31.187 "zone_append": false, 00:13:31.187 "compare": false, 00:13:31.187 "compare_and_write": false, 00:13:31.187 "abort": false, 00:13:31.187 "seek_hole": false, 00:13:31.187 "seek_data": false, 00:13:31.187 "copy": false, 00:13:31.187 "nvme_iov_md": false 00:13:31.187 }, 00:13:31.187 "driver_specific": { 00:13:31.187 "raid": { 00:13:31.187 "uuid": "3be7a004-cee0-42c5-9950-fa34259eb2d4", 00:13:31.187 "strip_size_kb": 64, 00:13:31.187 "state": "online", 00:13:31.187 "raid_level": "raid5f", 00:13:31.187 "superblock": true, 00:13:31.187 "num_base_bdevs": 3, 00:13:31.187 "num_base_bdevs_discovered": 3, 00:13:31.187 "num_base_bdevs_operational": 3, 00:13:31.187 "base_bdevs_list": [ 00:13:31.187 { 00:13:31.187 "name": "BaseBdev1", 00:13:31.187 "uuid": "48048971-a58b-4d26-ae77-43fc5021ef7e", 00:13:31.187 "is_configured": true, 00:13:31.187 "data_offset": 2048, 00:13:31.187 "data_size": 63488 00:13:31.187 }, 00:13:31.187 { 00:13:31.187 "name": "BaseBdev2", 00:13:31.187 "uuid": "84e988e7-434d-4494-a2dc-3d47eb1f382b", 00:13:31.187 "is_configured": true, 00:13:31.187 "data_offset": 2048, 00:13:31.187 "data_size": 63488 00:13:31.187 }, 00:13:31.187 { 00:13:31.187 "name": "BaseBdev3", 00:13:31.187 "uuid": "2ac4984b-ea5a-4092-b228-91da65e53708", 00:13:31.187 "is_configured": true, 00:13:31.188 "data_offset": 2048, 00:13:31.188 "data_size": 63488 00:13:31.188 } 00:13:31.188 ] 00:13:31.188 } 00:13:31.188 } 00:13:31.188 }' 00:13:31.188 23:31:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:31.188 BaseBdev2 00:13:31.188 BaseBdev3' 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.188 23:31:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.188 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.188 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.188 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.188 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:31.188 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.188 23:31:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.188 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.188 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.447 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.448 [2024-09-30 23:31:11.117287] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.448 "name": "Existed_Raid", 00:13:31.448 "uuid": "3be7a004-cee0-42c5-9950-fa34259eb2d4", 00:13:31.448 "strip_size_kb": 64, 00:13:31.448 "state": "online", 00:13:31.448 "raid_level": "raid5f", 00:13:31.448 "superblock": true, 00:13:31.448 "num_base_bdevs": 3, 00:13:31.448 "num_base_bdevs_discovered": 2, 00:13:31.448 "num_base_bdevs_operational": 2, 00:13:31.448 "base_bdevs_list": [ 00:13:31.448 { 00:13:31.448 "name": null, 00:13:31.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.448 "is_configured": false, 00:13:31.448 "data_offset": 0, 00:13:31.448 "data_size": 63488 00:13:31.448 }, 00:13:31.448 { 00:13:31.448 "name": "BaseBdev2", 00:13:31.448 "uuid": "84e988e7-434d-4494-a2dc-3d47eb1f382b", 00:13:31.448 "is_configured": true, 00:13:31.448 "data_offset": 2048, 00:13:31.448 "data_size": 63488 00:13:31.448 }, 00:13:31.448 { 00:13:31.448 "name": "BaseBdev3", 00:13:31.448 "uuid": "2ac4984b-ea5a-4092-b228-91da65e53708", 00:13:31.448 "is_configured": true, 00:13:31.448 "data_offset": 2048, 00:13:31.448 "data_size": 63488 00:13:31.448 } 00:13:31.448 ] 00:13:31.448 }' 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.448 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.708 23:31:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.708 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.708 [2024-09-30 23:31:11.556012] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:31.708 [2024-09-30 23:31:11.556160] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.968 [2024-09-30 23:31:11.567582] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.968 [2024-09-30 23:31:11.611548] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:31.968 [2024-09-30 23:31:11.611600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.968 BaseBdev2 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.968 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.969 [ 00:13:31.969 { 00:13:31.969 "name": "BaseBdev2", 00:13:31.969 "aliases": [ 00:13:31.969 "d137d8c5-371f-4979-b88e-33d5160ad46f" 00:13:31.969 ], 00:13:31.969 "product_name": "Malloc disk", 00:13:31.969 "block_size": 512, 00:13:31.969 "num_blocks": 65536, 00:13:31.969 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:31.969 "assigned_rate_limits": { 00:13:31.969 "rw_ios_per_sec": 0, 00:13:31.969 "rw_mbytes_per_sec": 0, 00:13:31.969 "r_mbytes_per_sec": 0, 00:13:31.969 "w_mbytes_per_sec": 0 00:13:31.969 }, 00:13:31.969 "claimed": false, 00:13:31.969 "zoned": false, 00:13:31.969 "supported_io_types": { 00:13:31.969 "read": true, 00:13:31.969 "write": true, 00:13:31.969 "unmap": true, 00:13:31.969 "flush": true, 00:13:31.969 "reset": true, 00:13:31.969 "nvme_admin": false, 00:13:31.969 "nvme_io": false, 00:13:31.969 "nvme_io_md": false, 00:13:31.969 "write_zeroes": true, 00:13:31.969 "zcopy": true, 00:13:31.969 "get_zone_info": false, 00:13:31.969 "zone_management": false, 00:13:31.969 "zone_append": false, 
00:13:31.969 "compare": false, 00:13:31.969 "compare_and_write": false, 00:13:31.969 "abort": true, 00:13:31.969 "seek_hole": false, 00:13:31.969 "seek_data": false, 00:13:31.969 "copy": true, 00:13:31.969 "nvme_iov_md": false 00:13:31.969 }, 00:13:31.969 "memory_domains": [ 00:13:31.969 { 00:13:31.969 "dma_device_id": "system", 00:13:31.969 "dma_device_type": 1 00:13:31.969 }, 00:13:31.969 { 00:13:31.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.969 "dma_device_type": 2 00:13:31.969 } 00:13:31.969 ], 00:13:31.969 "driver_specific": {} 00:13:31.969 } 00:13:31.969 ] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.969 BaseBdev3 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:31.969 
23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.969 [ 00:13:31.969 { 00:13:31.969 "name": "BaseBdev3", 00:13:31.969 "aliases": [ 00:13:31.969 "37967f48-daf5-4050-b160-e63810d10de7" 00:13:31.969 ], 00:13:31.969 "product_name": "Malloc disk", 00:13:31.969 "block_size": 512, 00:13:31.969 "num_blocks": 65536, 00:13:31.969 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:31.969 "assigned_rate_limits": { 00:13:31.969 "rw_ios_per_sec": 0, 00:13:31.969 "rw_mbytes_per_sec": 0, 00:13:31.969 "r_mbytes_per_sec": 0, 00:13:31.969 "w_mbytes_per_sec": 0 00:13:31.969 }, 00:13:31.969 "claimed": false, 00:13:31.969 "zoned": false, 00:13:31.969 "supported_io_types": { 00:13:31.969 "read": true, 00:13:31.969 "write": true, 00:13:31.969 "unmap": true, 00:13:31.969 "flush": true, 00:13:31.969 "reset": true, 00:13:31.969 "nvme_admin": false, 00:13:31.969 "nvme_io": false, 00:13:31.969 "nvme_io_md": false, 00:13:31.969 "write_zeroes": true, 00:13:31.969 "zcopy": true, 00:13:31.969 "get_zone_info": 
false, 00:13:31.969 "zone_management": false, 00:13:31.969 "zone_append": false, 00:13:31.969 "compare": false, 00:13:31.969 "compare_and_write": false, 00:13:31.969 "abort": true, 00:13:31.969 "seek_hole": false, 00:13:31.969 "seek_data": false, 00:13:31.969 "copy": true, 00:13:31.969 "nvme_iov_md": false 00:13:31.969 }, 00:13:31.969 "memory_domains": [ 00:13:31.969 { 00:13:31.969 "dma_device_id": "system", 00:13:31.969 "dma_device_type": 1 00:13:31.969 }, 00:13:31.969 { 00:13:31.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.969 "dma_device_type": 2 00:13:31.969 } 00:13:31.969 ], 00:13:31.969 "driver_specific": {} 00:13:31.969 } 00:13:31.969 ] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.969 [2024-09-30 23:31:11.763555] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:31.969 [2024-09-30 23:31:11.763601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:31.969 [2024-09-30 23:31:11.763623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.969 [2024-09-30 23:31:11.765490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.969 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.229 23:31:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.229 "name": "Existed_Raid", 00:13:32.229 "uuid": "359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:32.229 "strip_size_kb": 64, 00:13:32.229 "state": "configuring", 00:13:32.229 "raid_level": "raid5f", 00:13:32.229 "superblock": true, 00:13:32.229 "num_base_bdevs": 3, 00:13:32.229 "num_base_bdevs_discovered": 2, 00:13:32.229 "num_base_bdevs_operational": 3, 00:13:32.229 "base_bdevs_list": [ 00:13:32.229 { 00:13:32.229 "name": "BaseBdev1", 00:13:32.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.229 "is_configured": false, 00:13:32.229 "data_offset": 0, 00:13:32.229 "data_size": 0 00:13:32.229 }, 00:13:32.229 { 00:13:32.229 "name": "BaseBdev2", 00:13:32.229 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:32.229 "is_configured": true, 00:13:32.229 "data_offset": 2048, 00:13:32.229 "data_size": 63488 00:13:32.229 }, 00:13:32.229 { 00:13:32.229 "name": "BaseBdev3", 00:13:32.229 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:32.229 "is_configured": true, 00:13:32.229 "data_offset": 2048, 00:13:32.229 "data_size": 63488 00:13:32.229 } 00:13:32.229 ] 00:13:32.229 }' 00:13:32.229 23:31:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.229 23:31:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.488 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:32.488 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.488 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.489 [2024-09-30 23:31:12.226820] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.489 
23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.489 "name": "Existed_Raid", 00:13:32.489 "uuid": 
"359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:32.489 "strip_size_kb": 64, 00:13:32.489 "state": "configuring", 00:13:32.489 "raid_level": "raid5f", 00:13:32.489 "superblock": true, 00:13:32.489 "num_base_bdevs": 3, 00:13:32.489 "num_base_bdevs_discovered": 1, 00:13:32.489 "num_base_bdevs_operational": 3, 00:13:32.489 "base_bdevs_list": [ 00:13:32.489 { 00:13:32.489 "name": "BaseBdev1", 00:13:32.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.489 "is_configured": false, 00:13:32.489 "data_offset": 0, 00:13:32.489 "data_size": 0 00:13:32.489 }, 00:13:32.489 { 00:13:32.489 "name": null, 00:13:32.489 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:32.489 "is_configured": false, 00:13:32.489 "data_offset": 0, 00:13:32.489 "data_size": 63488 00:13:32.489 }, 00:13:32.489 { 00:13:32.489 "name": "BaseBdev3", 00:13:32.489 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:32.489 "is_configured": true, 00:13:32.489 "data_offset": 2048, 00:13:32.489 "data_size": 63488 00:13:32.489 } 00:13:32.489 ] 00:13:32.489 }' 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.489 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:33.057 23:31:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.057 [2024-09-30 23:31:12.725277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.057 BaseBdev1 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.057 [ 00:13:33.057 { 00:13:33.057 "name": "BaseBdev1", 00:13:33.057 "aliases": [ 00:13:33.057 "56bb48e5-45d8-4233-8af4-1115128d5d09" 00:13:33.057 ], 00:13:33.057 "product_name": "Malloc disk", 00:13:33.057 "block_size": 512, 00:13:33.057 "num_blocks": 65536, 00:13:33.057 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:33.057 "assigned_rate_limits": { 00:13:33.057 "rw_ios_per_sec": 0, 00:13:33.057 "rw_mbytes_per_sec": 0, 00:13:33.057 "r_mbytes_per_sec": 0, 00:13:33.057 "w_mbytes_per_sec": 0 00:13:33.057 }, 00:13:33.057 "claimed": true, 00:13:33.057 "claim_type": "exclusive_write", 00:13:33.057 "zoned": false, 00:13:33.057 "supported_io_types": { 00:13:33.057 "read": true, 00:13:33.057 "write": true, 00:13:33.057 "unmap": true, 00:13:33.057 "flush": true, 00:13:33.057 "reset": true, 00:13:33.057 "nvme_admin": false, 00:13:33.057 "nvme_io": false, 00:13:33.057 "nvme_io_md": false, 00:13:33.057 "write_zeroes": true, 00:13:33.057 "zcopy": true, 00:13:33.057 "get_zone_info": false, 00:13:33.057 "zone_management": false, 00:13:33.057 "zone_append": false, 00:13:33.057 "compare": false, 00:13:33.057 "compare_and_write": false, 00:13:33.057 "abort": true, 00:13:33.057 "seek_hole": false, 00:13:33.057 "seek_data": false, 00:13:33.057 "copy": true, 00:13:33.057 "nvme_iov_md": false 00:13:33.057 }, 00:13:33.057 "memory_domains": [ 00:13:33.057 { 00:13:33.057 "dma_device_id": "system", 00:13:33.057 "dma_device_type": 1 00:13:33.057 }, 00:13:33.057 { 00:13:33.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.057 "dma_device_type": 2 00:13:33.057 } 00:13:33.057 ], 00:13:33.057 "driver_specific": {} 00:13:33.057 } 00:13:33.057 ] 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.057 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.057 "name": "Existed_Raid", 00:13:33.057 "uuid": 
"359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:33.057 "strip_size_kb": 64, 00:13:33.057 "state": "configuring", 00:13:33.057 "raid_level": "raid5f", 00:13:33.057 "superblock": true, 00:13:33.057 "num_base_bdevs": 3, 00:13:33.057 "num_base_bdevs_discovered": 2, 00:13:33.057 "num_base_bdevs_operational": 3, 00:13:33.057 "base_bdevs_list": [ 00:13:33.057 { 00:13:33.057 "name": "BaseBdev1", 00:13:33.057 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:33.058 "is_configured": true, 00:13:33.058 "data_offset": 2048, 00:13:33.058 "data_size": 63488 00:13:33.058 }, 00:13:33.058 { 00:13:33.058 "name": null, 00:13:33.058 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:33.058 "is_configured": false, 00:13:33.058 "data_offset": 0, 00:13:33.058 "data_size": 63488 00:13:33.058 }, 00:13:33.058 { 00:13:33.058 "name": "BaseBdev3", 00:13:33.058 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:33.058 "is_configured": true, 00:13:33.058 "data_offset": 2048, 00:13:33.058 "data_size": 63488 00:13:33.058 } 00:13:33.058 ] 00:13:33.058 }' 00:13:33.058 23:31:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.058 23:31:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:33.628 23:31:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.628 [2024-09-30 23:31:13.260390] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.628 "name": "Existed_Raid", 00:13:33.628 "uuid": "359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:33.628 "strip_size_kb": 64, 00:13:33.628 "state": "configuring", 00:13:33.628 "raid_level": "raid5f", 00:13:33.628 "superblock": true, 00:13:33.628 "num_base_bdevs": 3, 00:13:33.628 "num_base_bdevs_discovered": 1, 00:13:33.628 "num_base_bdevs_operational": 3, 00:13:33.628 "base_bdevs_list": [ 00:13:33.628 { 00:13:33.628 "name": "BaseBdev1", 00:13:33.628 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:33.628 "is_configured": true, 00:13:33.628 "data_offset": 2048, 00:13:33.628 "data_size": 63488 00:13:33.628 }, 00:13:33.628 { 00:13:33.628 "name": null, 00:13:33.628 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:33.628 "is_configured": false, 00:13:33.628 "data_offset": 0, 00:13:33.628 "data_size": 63488 00:13:33.628 }, 00:13:33.628 { 00:13:33.628 "name": null, 00:13:33.628 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:33.628 "is_configured": false, 00:13:33.628 "data_offset": 0, 00:13:33.628 "data_size": 63488 00:13:33.628 } 00:13:33.628 ] 00:13:33.628 }' 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.628 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.888 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.148 [2024-09-30 23:31:13.743601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.148 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.148 "name": "Existed_Raid", 00:13:34.148 "uuid": "359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:34.148 "strip_size_kb": 64, 00:13:34.148 "state": "configuring", 00:13:34.148 "raid_level": "raid5f", 00:13:34.148 "superblock": true, 00:13:34.148 "num_base_bdevs": 3, 00:13:34.148 "num_base_bdevs_discovered": 2, 00:13:34.148 "num_base_bdevs_operational": 3, 00:13:34.148 "base_bdevs_list": [ 00:13:34.148 { 00:13:34.149 "name": "BaseBdev1", 00:13:34.149 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:34.149 "is_configured": true, 00:13:34.149 "data_offset": 2048, 00:13:34.149 "data_size": 63488 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "name": null, 00:13:34.149 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:34.149 "is_configured": false, 00:13:34.149 "data_offset": 0, 00:13:34.149 "data_size": 63488 00:13:34.149 }, 00:13:34.149 { 00:13:34.149 "name": "BaseBdev3", 00:13:34.149 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 
00:13:34.149 "is_configured": true, 00:13:34.149 "data_offset": 2048, 00:13:34.149 "data_size": 63488 00:13:34.149 } 00:13:34.149 ] 00:13:34.149 }' 00:13:34.149 23:31:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.149 23:31:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.408 [2024-09-30 23:31:14.230839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.408 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.668 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.668 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.668 "name": "Existed_Raid", 00:13:34.668 "uuid": "359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:34.668 "strip_size_kb": 64, 00:13:34.668 "state": "configuring", 00:13:34.668 "raid_level": "raid5f", 00:13:34.668 "superblock": true, 00:13:34.668 "num_base_bdevs": 3, 00:13:34.668 "num_base_bdevs_discovered": 1, 00:13:34.668 "num_base_bdevs_operational": 3, 00:13:34.668 "base_bdevs_list": [ 00:13:34.668 { 00:13:34.668 
"name": null, 00:13:34.668 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:34.668 "is_configured": false, 00:13:34.668 "data_offset": 0, 00:13:34.668 "data_size": 63488 00:13:34.668 }, 00:13:34.668 { 00:13:34.668 "name": null, 00:13:34.668 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:34.668 "is_configured": false, 00:13:34.668 "data_offset": 0, 00:13:34.668 "data_size": 63488 00:13:34.668 }, 00:13:34.668 { 00:13:34.668 "name": "BaseBdev3", 00:13:34.668 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:34.668 "is_configured": true, 00:13:34.668 "data_offset": 2048, 00:13:34.668 "data_size": 63488 00:13:34.668 } 00:13:34.668 ] 00:13:34.668 }' 00:13:34.668 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.668 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.928 [2024-09-30 
23:31:14.748630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.928 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.929 23:31:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.188 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.188 "name": "Existed_Raid", 00:13:35.188 "uuid": "359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:35.188 "strip_size_kb": 64, 00:13:35.188 "state": "configuring", 00:13:35.189 "raid_level": "raid5f", 00:13:35.189 "superblock": true, 00:13:35.189 "num_base_bdevs": 3, 00:13:35.189 "num_base_bdevs_discovered": 2, 00:13:35.189 "num_base_bdevs_operational": 3, 00:13:35.189 "base_bdevs_list": [ 00:13:35.189 { 00:13:35.189 "name": null, 00:13:35.189 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:35.189 "is_configured": false, 00:13:35.189 "data_offset": 0, 00:13:35.189 "data_size": 63488 00:13:35.189 }, 00:13:35.189 { 00:13:35.189 "name": "BaseBdev2", 00:13:35.189 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:35.189 "is_configured": true, 00:13:35.189 "data_offset": 2048, 00:13:35.189 "data_size": 63488 00:13:35.189 }, 00:13:35.189 { 00:13:35.189 "name": "BaseBdev3", 00:13:35.189 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:35.189 "is_configured": true, 00:13:35.189 "data_offset": 2048, 00:13:35.189 "data_size": 63488 00:13:35.189 } 00:13:35.189 ] 00:13:35.189 }' 00:13:35.189 23:31:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.189 23:31:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.448 23:31:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 56bb48e5-45d8-4233-8af4-1115128d5d09 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.448 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.708 [2024-09-30 23:31:15.310899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:35.708 [2024-09-30 23:31:15.311087] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:35.708 [2024-09-30 23:31:15.311105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:35.708 NewBaseBdev 00:13:35.709 [2024-09-30 23:31:15.311401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:35.709 [2024-09-30 23:31:15.311845] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:35.709 [2024-09-30 23:31:15.311884] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000006d00 00:13:35.709 [2024-09-30 23:31:15.312003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.709 [ 00:13:35.709 { 00:13:35.709 "name": "NewBaseBdev", 00:13:35.709 "aliases": [ 00:13:35.709 "56bb48e5-45d8-4233-8af4-1115128d5d09" 00:13:35.709 ], 00:13:35.709 "product_name": "Malloc disk", 00:13:35.709 
"block_size": 512, 00:13:35.709 "num_blocks": 65536, 00:13:35.709 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:35.709 "assigned_rate_limits": { 00:13:35.709 "rw_ios_per_sec": 0, 00:13:35.709 "rw_mbytes_per_sec": 0, 00:13:35.709 "r_mbytes_per_sec": 0, 00:13:35.709 "w_mbytes_per_sec": 0 00:13:35.709 }, 00:13:35.709 "claimed": true, 00:13:35.709 "claim_type": "exclusive_write", 00:13:35.709 "zoned": false, 00:13:35.709 "supported_io_types": { 00:13:35.709 "read": true, 00:13:35.709 "write": true, 00:13:35.709 "unmap": true, 00:13:35.709 "flush": true, 00:13:35.709 "reset": true, 00:13:35.709 "nvme_admin": false, 00:13:35.709 "nvme_io": false, 00:13:35.709 "nvme_io_md": false, 00:13:35.709 "write_zeroes": true, 00:13:35.709 "zcopy": true, 00:13:35.709 "get_zone_info": false, 00:13:35.709 "zone_management": false, 00:13:35.709 "zone_append": false, 00:13:35.709 "compare": false, 00:13:35.709 "compare_and_write": false, 00:13:35.709 "abort": true, 00:13:35.709 "seek_hole": false, 00:13:35.709 "seek_data": false, 00:13:35.709 "copy": true, 00:13:35.709 "nvme_iov_md": false 00:13:35.709 }, 00:13:35.709 "memory_domains": [ 00:13:35.709 { 00:13:35.709 "dma_device_id": "system", 00:13:35.709 "dma_device_type": 1 00:13:35.709 }, 00:13:35.709 { 00:13:35.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.709 "dma_device_type": 2 00:13:35.709 } 00:13:35.709 ], 00:13:35.709 "driver_specific": {} 00:13:35.709 } 00:13:35.709 ] 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.709 23:31:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.709 "name": "Existed_Raid", 00:13:35.709 "uuid": "359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:35.709 "strip_size_kb": 64, 00:13:35.709 "state": "online", 00:13:35.709 "raid_level": "raid5f", 00:13:35.709 "superblock": true, 00:13:35.709 "num_base_bdevs": 3, 00:13:35.709 "num_base_bdevs_discovered": 3, 00:13:35.709 "num_base_bdevs_operational": 3, 00:13:35.709 
"base_bdevs_list": [ 00:13:35.709 { 00:13:35.709 "name": "NewBaseBdev", 00:13:35.709 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:35.709 "is_configured": true, 00:13:35.709 "data_offset": 2048, 00:13:35.709 "data_size": 63488 00:13:35.709 }, 00:13:35.709 { 00:13:35.709 "name": "BaseBdev2", 00:13:35.709 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:35.709 "is_configured": true, 00:13:35.709 "data_offset": 2048, 00:13:35.709 "data_size": 63488 00:13:35.709 }, 00:13:35.709 { 00:13:35.709 "name": "BaseBdev3", 00:13:35.709 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:35.709 "is_configured": true, 00:13:35.709 "data_offset": 2048, 00:13:35.709 "data_size": 63488 00:13:35.709 } 00:13:35.709 ] 00:13:35.709 }' 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.709 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.969 [2024-09-30 23:31:15.782259] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.969 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.969 "name": "Existed_Raid", 00:13:35.969 "aliases": [ 00:13:35.969 "359600b2-b1ec-40b2-8106-5542d1b1ce1a" 00:13:35.969 ], 00:13:35.969 "product_name": "Raid Volume", 00:13:35.969 "block_size": 512, 00:13:35.969 "num_blocks": 126976, 00:13:35.969 "uuid": "359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:35.969 "assigned_rate_limits": { 00:13:35.969 "rw_ios_per_sec": 0, 00:13:35.969 "rw_mbytes_per_sec": 0, 00:13:35.969 "r_mbytes_per_sec": 0, 00:13:35.969 "w_mbytes_per_sec": 0 00:13:35.969 }, 00:13:35.969 "claimed": false, 00:13:35.969 "zoned": false, 00:13:35.969 "supported_io_types": { 00:13:35.969 "read": true, 00:13:35.969 "write": true, 00:13:35.969 "unmap": false, 00:13:35.969 "flush": false, 00:13:35.969 "reset": true, 00:13:35.969 "nvme_admin": false, 00:13:35.969 "nvme_io": false, 00:13:35.969 "nvme_io_md": false, 00:13:35.969 "write_zeroes": true, 00:13:35.969 "zcopy": false, 00:13:35.969 "get_zone_info": false, 00:13:35.969 "zone_management": false, 00:13:35.969 "zone_append": false, 00:13:35.969 "compare": false, 00:13:35.969 "compare_and_write": false, 00:13:35.969 "abort": false, 00:13:35.969 "seek_hole": false, 00:13:35.969 "seek_data": false, 00:13:35.969 "copy": false, 00:13:35.969 "nvme_iov_md": false 00:13:35.969 }, 00:13:35.969 "driver_specific": { 00:13:35.969 "raid": { 00:13:35.969 "uuid": "359600b2-b1ec-40b2-8106-5542d1b1ce1a", 00:13:35.969 "strip_size_kb": 64, 00:13:35.969 "state": "online", 00:13:35.969 "raid_level": "raid5f", 00:13:35.969 "superblock": true, 00:13:35.969 
"num_base_bdevs": 3, 00:13:35.969 "num_base_bdevs_discovered": 3, 00:13:35.969 "num_base_bdevs_operational": 3, 00:13:35.969 "base_bdevs_list": [ 00:13:35.969 { 00:13:35.969 "name": "NewBaseBdev", 00:13:35.969 "uuid": "56bb48e5-45d8-4233-8af4-1115128d5d09", 00:13:35.969 "is_configured": true, 00:13:35.969 "data_offset": 2048, 00:13:35.969 "data_size": 63488 00:13:35.969 }, 00:13:35.969 { 00:13:35.969 "name": "BaseBdev2", 00:13:35.969 "uuid": "d137d8c5-371f-4979-b88e-33d5160ad46f", 00:13:35.969 "is_configured": true, 00:13:35.969 "data_offset": 2048, 00:13:35.969 "data_size": 63488 00:13:35.969 }, 00:13:35.969 { 00:13:35.969 "name": "BaseBdev3", 00:13:35.969 "uuid": "37967f48-daf5-4050-b160-e63810d10de7", 00:13:35.969 "is_configured": true, 00:13:35.969 "data_offset": 2048, 00:13:35.969 "data_size": 63488 00:13:35.969 } 00:13:35.969 ] 00:13:35.969 } 00:13:35.969 } 00:13:35.969 }' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:36.230 BaseBdev2 00:13:36.230 BaseBdev3' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.230 
23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.230 23:31:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.230 23:31:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.230 [2024-09-30 23:31:16.021717] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.230 [2024-09-30 23:31:16.021749] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.230 [2024-09-30 23:31:16.021815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.230 [2024-09-30 23:31:16.022063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.230 [2024-09-30 23:31:16.022088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91103 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91103 ']' 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91103 00:13:36.230 23:31:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91103 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:36.230 killing process with pid 91103 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91103' 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91103 00:13:36.230 [2024-09-30 23:31:16.066383] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.230 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91103 00:13:36.489 [2024-09-30 23:31:16.098085] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:36.749 23:31:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:36.749 00:13:36.749 real 0m8.755s 00:13:36.749 user 0m14.830s 00:13:36.749 sys 0m1.904s 00:13:36.749 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.749 23:31:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.749 ************************************ 00:13:36.749 END TEST raid5f_state_function_test_sb 00:13:36.749 ************************************ 00:13:36.749 23:31:16 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:36.749 23:31:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:36.749 
23:31:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.749 23:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:36.749 ************************************ 00:13:36.749 START TEST raid5f_superblock_test 00:13:36.749 ************************************ 00:13:36.749 23:31:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:36.749 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:36.749 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:36.749 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:36.749 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:36.749 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:36.749 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91707 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91707 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91707 ']' 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.750 23:31:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.750 [2024-09-30 23:31:16.520747] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:13:36.750 [2024-09-30 23:31:16.520907] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91707 ] 00:13:37.009 [2024-09-30 23:31:16.661440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.009 [2024-09-30 23:31:16.705159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.009 [2024-09-30 23:31:16.748898] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.009 [2024-09-30 23:31:16.748947] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.580 malloc1 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.580 [2024-09-30 23:31:17.391942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:37.580 [2024-09-30 23:31:17.392019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.580 [2024-09-30 23:31:17.392042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:37.580 [2024-09-30 23:31:17.392068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.580 [2024-09-30 23:31:17.394165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.580 [2024-09-30 23:31:17.394208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:37.580 pt1 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.580 malloc2 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.580 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.840 [2024-09-30 23:31:17.436212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:37.840 [2024-09-30 23:31:17.436319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.840 [2024-09-30 23:31:17.436360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:37.840 [2024-09-30 23:31:17.436391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.840 [2024-09-30 23:31:17.441004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.840 [2024-09-30 23:31:17.441077] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:37.840 pt2 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.840 malloc3 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.840 [2024-09-30 23:31:17.467545] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:37.840 [2024-09-30 23:31:17.467599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.840 [2024-09-30 23:31:17.467619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:37.840 [2024-09-30 23:31:17.467632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.840 [2024-09-30 23:31:17.469729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.840 [2024-09-30 23:31:17.469769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:37.840 pt3 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.840 [2024-09-30 23:31:17.479576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:37.840 [2024-09-30 23:31:17.481424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.840 [2024-09-30 23:31:17.481501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:37.840 [2024-09-30 23:31:17.481668] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:37.840 [2024-09-30 23:31:17.481681] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:13:37.840 [2024-09-30 23:31:17.481985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:37.840 [2024-09-30 23:31:17.482417] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:37.840 [2024-09-30 23:31:17.482446] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:37.840 [2024-09-30 23:31:17.482579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.840 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.841 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:37.841 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.841 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.841 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.841 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.841 "name": "raid_bdev1", 00:13:37.841 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:37.841 "strip_size_kb": 64, 00:13:37.841 "state": "online", 00:13:37.841 "raid_level": "raid5f", 00:13:37.841 "superblock": true, 00:13:37.841 "num_base_bdevs": 3, 00:13:37.841 "num_base_bdevs_discovered": 3, 00:13:37.841 "num_base_bdevs_operational": 3, 00:13:37.841 "base_bdevs_list": [ 00:13:37.841 { 00:13:37.841 "name": "pt1", 00:13:37.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.841 "is_configured": true, 00:13:37.841 "data_offset": 2048, 00:13:37.841 "data_size": 63488 00:13:37.841 }, 00:13:37.841 { 00:13:37.841 "name": "pt2", 00:13:37.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.841 "is_configured": true, 00:13:37.841 "data_offset": 2048, 00:13:37.841 "data_size": 63488 00:13:37.841 }, 00:13:37.841 { 00:13:37.841 "name": "pt3", 00:13:37.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.841 "is_configured": true, 00:13:37.841 "data_offset": 2048, 00:13:37.841 "data_size": 63488 00:13:37.841 } 00:13:37.841 ] 00:13:37.841 }' 00:13:37.841 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.841 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:38.101 23:31:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.101 [2024-09-30 23:31:17.931377] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.101 23:31:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.361 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:38.361 "name": "raid_bdev1", 00:13:38.361 "aliases": [ 00:13:38.361 "4d30c93b-3198-4631-a7bd-1bba18bbf10f" 00:13:38.361 ], 00:13:38.361 "product_name": "Raid Volume", 00:13:38.361 "block_size": 512, 00:13:38.361 "num_blocks": 126976, 00:13:38.361 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:38.361 "assigned_rate_limits": { 00:13:38.361 "rw_ios_per_sec": 0, 00:13:38.361 "rw_mbytes_per_sec": 0, 00:13:38.361 "r_mbytes_per_sec": 0, 00:13:38.361 "w_mbytes_per_sec": 0 00:13:38.361 }, 00:13:38.361 "claimed": false, 00:13:38.361 "zoned": false, 00:13:38.361 "supported_io_types": { 00:13:38.361 "read": true, 00:13:38.361 "write": true, 00:13:38.361 "unmap": false, 00:13:38.361 "flush": false, 00:13:38.361 "reset": true, 00:13:38.361 "nvme_admin": false, 00:13:38.361 "nvme_io": false, 00:13:38.361 "nvme_io_md": false, 
00:13:38.361 "write_zeroes": true, 00:13:38.361 "zcopy": false, 00:13:38.361 "get_zone_info": false, 00:13:38.361 "zone_management": false, 00:13:38.361 "zone_append": false, 00:13:38.361 "compare": false, 00:13:38.361 "compare_and_write": false, 00:13:38.362 "abort": false, 00:13:38.362 "seek_hole": false, 00:13:38.362 "seek_data": false, 00:13:38.362 "copy": false, 00:13:38.362 "nvme_iov_md": false 00:13:38.362 }, 00:13:38.362 "driver_specific": { 00:13:38.362 "raid": { 00:13:38.362 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:38.362 "strip_size_kb": 64, 00:13:38.362 "state": "online", 00:13:38.362 "raid_level": "raid5f", 00:13:38.362 "superblock": true, 00:13:38.362 "num_base_bdevs": 3, 00:13:38.362 "num_base_bdevs_discovered": 3, 00:13:38.362 "num_base_bdevs_operational": 3, 00:13:38.362 "base_bdevs_list": [ 00:13:38.362 { 00:13:38.362 "name": "pt1", 00:13:38.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.362 "is_configured": true, 00:13:38.362 "data_offset": 2048, 00:13:38.362 "data_size": 63488 00:13:38.362 }, 00:13:38.362 { 00:13:38.362 "name": "pt2", 00:13:38.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.362 "is_configured": true, 00:13:38.362 "data_offset": 2048, 00:13:38.362 "data_size": 63488 00:13:38.362 }, 00:13:38.362 { 00:13:38.362 "name": "pt3", 00:13:38.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.362 "is_configured": true, 00:13:38.362 "data_offset": 2048, 00:13:38.362 "data_size": 63488 00:13:38.362 } 00:13:38.362 ] 00:13:38.362 } 00:13:38.362 } 00:13:38.362 }' 00:13:38.362 23:31:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:38.362 pt2 00:13:38.362 pt3' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.362 
23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:38.362 [2024-09-30 23:31:18.174934] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.362 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4d30c93b-3198-4631-a7bd-1bba18bbf10f 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4d30c93b-3198-4631-a7bd-1bba18bbf10f ']' 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:38.623 23:31:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.623 [2024-09-30 23:31:18.222668] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.623 [2024-09-30 23:31:18.222702] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.623 [2024-09-30 23:31:18.222781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.623 [2024-09-30 23:31:18.222852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.623 [2024-09-30 23:31:18.222882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.623 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.623 [2024-09-30 23:31:18.374433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:38.623 [2024-09-30 23:31:18.376320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:38.623 [2024-09-30 23:31:18.376377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:38.623 [2024-09-30 23:31:18.376430] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:38.623 [2024-09-30 23:31:18.376477] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:38.623 [2024-09-30 23:31:18.376500] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:38.623 [2024-09-30 23:31:18.376517] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.623 [2024-09-30 23:31:18.376531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:13:38.623 request: 00:13:38.623 { 00:13:38.623 "name": "raid_bdev1", 00:13:38.623 "raid_level": "raid5f", 00:13:38.623 "base_bdevs": [ 00:13:38.623 "malloc1", 00:13:38.623 "malloc2", 00:13:38.623 "malloc3" 00:13:38.623 ], 00:13:38.623 "strip_size_kb": 64, 00:13:38.623 "superblock": false, 00:13:38.623 "method": "bdev_raid_create", 00:13:38.623 "req_id": 1 00:13:38.623 } 00:13:38.623 Got JSON-RPC error response 00:13:38.623 response: 00:13:38.623 { 00:13:38.624 "code": -17, 00:13:38.624 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:38.624 } 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.624 
23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.624 [2024-09-30 23:31:18.434301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:38.624 [2024-09-30 23:31:18.434351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.624 [2024-09-30 23:31:18.434367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:38.624 [2024-09-30 23:31:18.434379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.624 [2024-09-30 23:31:18.436508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.624 [2024-09-30 23:31:18.436560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:38.624 [2024-09-30 23:31:18.436627] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:38.624 [2024-09-30 23:31:18.436663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:38.624 pt1 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.624 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.884 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.884 "name": "raid_bdev1", 00:13:38.884 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:38.884 "strip_size_kb": 64, 00:13:38.884 "state": "configuring", 00:13:38.884 "raid_level": "raid5f", 00:13:38.884 "superblock": true, 00:13:38.884 "num_base_bdevs": 3, 00:13:38.884 "num_base_bdevs_discovered": 1, 00:13:38.884 
"num_base_bdevs_operational": 3, 00:13:38.884 "base_bdevs_list": [ 00:13:38.884 { 00:13:38.884 "name": "pt1", 00:13:38.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.884 "is_configured": true, 00:13:38.884 "data_offset": 2048, 00:13:38.884 "data_size": 63488 00:13:38.884 }, 00:13:38.884 { 00:13:38.884 "name": null, 00:13:38.884 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.884 "is_configured": false, 00:13:38.884 "data_offset": 2048, 00:13:38.884 "data_size": 63488 00:13:38.884 }, 00:13:38.884 { 00:13:38.884 "name": null, 00:13:38.884 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.884 "is_configured": false, 00:13:38.884 "data_offset": 2048, 00:13:38.884 "data_size": 63488 00:13:38.884 } 00:13:38.884 ] 00:13:38.884 }' 00:13:38.884 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.884 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.144 [2024-09-30 23:31:18.853665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.144 [2024-09-30 23:31:18.853719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.144 [2024-09-30 23:31:18.853738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:39.144 [2024-09-30 23:31:18.853752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.144 [2024-09-30 23:31:18.854105] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.144 [2024-09-30 23:31:18.854128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.144 [2024-09-30 23:31:18.854188] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:39.144 [2024-09-30 23:31:18.854211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.144 pt2 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.144 [2024-09-30 23:31:18.861670] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.144 "name": "raid_bdev1", 00:13:39.144 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:39.144 "strip_size_kb": 64, 00:13:39.144 "state": "configuring", 00:13:39.144 "raid_level": "raid5f", 00:13:39.144 "superblock": true, 00:13:39.144 "num_base_bdevs": 3, 00:13:39.144 "num_base_bdevs_discovered": 1, 00:13:39.144 "num_base_bdevs_operational": 3, 00:13:39.144 "base_bdevs_list": [ 00:13:39.144 { 00:13:39.144 "name": "pt1", 00:13:39.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.144 "is_configured": true, 00:13:39.144 "data_offset": 2048, 00:13:39.144 "data_size": 63488 00:13:39.144 }, 00:13:39.144 { 00:13:39.144 "name": null, 00:13:39.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.144 "is_configured": false, 00:13:39.144 "data_offset": 0, 00:13:39.144 "data_size": 63488 00:13:39.144 }, 00:13:39.144 { 00:13:39.144 "name": null, 00:13:39.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.144 "is_configured": false, 00:13:39.144 "data_offset": 2048, 00:13:39.144 "data_size": 63488 00:13:39.144 } 00:13:39.144 ] 00:13:39.144 }' 00:13:39.144 23:31:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.144 23:31:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.712 [2024-09-30 23:31:19.296911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:39.712 [2024-09-30 23:31:19.296963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.712 [2024-09-30 23:31:19.296983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:39.712 [2024-09-30 23:31:19.296993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.712 [2024-09-30 23:31:19.297348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.712 [2024-09-30 23:31:19.297367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:39.712 [2024-09-30 23:31:19.297432] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:39.712 [2024-09-30 23:31:19.297451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:39.712 pt2 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:39.712 23:31:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.712 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.712 [2024-09-30 23:31:19.304893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:39.712 [2024-09-30 23:31:19.304935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.712 [2024-09-30 23:31:19.304953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:39.712 [2024-09-30 23:31:19.304963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.712 [2024-09-30 23:31:19.305285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.712 [2024-09-30 23:31:19.305327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:39.713 [2024-09-30 23:31:19.305386] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:39.713 [2024-09-30 23:31:19.305404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:39.713 [2024-09-30 23:31:19.305500] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:39.713 [2024-09-30 23:31:19.305517] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:39.713 [2024-09-30 23:31:19.305738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:39.713 [2024-09-30 23:31:19.306161] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:39.713 [2024-09-30 23:31:19.306214] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:13:39.713 [2024-09-30 23:31:19.306319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.713 pt3 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.713 "name": "raid_bdev1", 00:13:39.713 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:39.713 "strip_size_kb": 64, 00:13:39.713 "state": "online", 00:13:39.713 "raid_level": "raid5f", 00:13:39.713 "superblock": true, 00:13:39.713 "num_base_bdevs": 3, 00:13:39.713 "num_base_bdevs_discovered": 3, 00:13:39.713 "num_base_bdevs_operational": 3, 00:13:39.713 "base_bdevs_list": [ 00:13:39.713 { 00:13:39.713 "name": "pt1", 00:13:39.713 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.713 "is_configured": true, 00:13:39.713 "data_offset": 2048, 00:13:39.713 "data_size": 63488 00:13:39.713 }, 00:13:39.713 { 00:13:39.713 "name": "pt2", 00:13:39.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.713 "is_configured": true, 00:13:39.713 "data_offset": 2048, 00:13:39.713 "data_size": 63488 00:13:39.713 }, 00:13:39.713 { 00:13:39.713 "name": "pt3", 00:13:39.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.713 "is_configured": true, 00:13:39.713 "data_offset": 2048, 00:13:39.713 "data_size": 63488 00:13:39.713 } 00:13:39.713 ] 00:13:39.713 }' 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.713 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:39.973 
23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.973 [2024-09-30 23:31:19.732335] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:39.973 "name": "raid_bdev1", 00:13:39.973 "aliases": [ 00:13:39.973 "4d30c93b-3198-4631-a7bd-1bba18bbf10f" 00:13:39.973 ], 00:13:39.973 "product_name": "Raid Volume", 00:13:39.973 "block_size": 512, 00:13:39.973 "num_blocks": 126976, 00:13:39.973 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:39.973 "assigned_rate_limits": { 00:13:39.973 "rw_ios_per_sec": 0, 00:13:39.973 "rw_mbytes_per_sec": 0, 00:13:39.973 "r_mbytes_per_sec": 0, 00:13:39.973 "w_mbytes_per_sec": 0 00:13:39.973 }, 00:13:39.973 "claimed": false, 00:13:39.973 "zoned": false, 00:13:39.973 "supported_io_types": { 00:13:39.973 "read": true, 00:13:39.973 "write": true, 00:13:39.973 "unmap": false, 00:13:39.973 "flush": false, 00:13:39.973 "reset": true, 00:13:39.973 "nvme_admin": false, 00:13:39.973 "nvme_io": false, 00:13:39.973 "nvme_io_md": false, 00:13:39.973 "write_zeroes": true, 00:13:39.973 "zcopy": false, 00:13:39.973 "get_zone_info": false, 
00:13:39.973 "zone_management": false, 00:13:39.973 "zone_append": false, 00:13:39.973 "compare": false, 00:13:39.973 "compare_and_write": false, 00:13:39.973 "abort": false, 00:13:39.973 "seek_hole": false, 00:13:39.973 "seek_data": false, 00:13:39.973 "copy": false, 00:13:39.973 "nvme_iov_md": false 00:13:39.973 }, 00:13:39.973 "driver_specific": { 00:13:39.973 "raid": { 00:13:39.973 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:39.973 "strip_size_kb": 64, 00:13:39.973 "state": "online", 00:13:39.973 "raid_level": "raid5f", 00:13:39.973 "superblock": true, 00:13:39.973 "num_base_bdevs": 3, 00:13:39.973 "num_base_bdevs_discovered": 3, 00:13:39.973 "num_base_bdevs_operational": 3, 00:13:39.973 "base_bdevs_list": [ 00:13:39.973 { 00:13:39.973 "name": "pt1", 00:13:39.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.973 "is_configured": true, 00:13:39.973 "data_offset": 2048, 00:13:39.973 "data_size": 63488 00:13:39.973 }, 00:13:39.973 { 00:13:39.973 "name": "pt2", 00:13:39.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.973 "is_configured": true, 00:13:39.973 "data_offset": 2048, 00:13:39.973 "data_size": 63488 00:13:39.973 }, 00:13:39.973 { 00:13:39.973 "name": "pt3", 00:13:39.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.973 "is_configured": true, 00:13:39.973 "data_offset": 2048, 00:13:39.973 "data_size": 63488 00:13:39.973 } 00:13:39.973 ] 00:13:39.973 } 00:13:39.973 } 00:13:39.973 }' 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:39.973 pt2 00:13:39.973 pt3' 00:13:39.973 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:40.233 23:31:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.233 [2024-09-30 23:31:19.983874] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4d30c93b-3198-4631-a7bd-1bba18bbf10f '!=' 4d30c93b-3198-4631-a7bd-1bba18bbf10f ']' 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:40.233 23:31:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.233 [2024-09-30 23:31:20.031653] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.233 23:31:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.233 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.234 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.234 "name": "raid_bdev1", 00:13:40.234 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:40.234 "strip_size_kb": 64, 00:13:40.234 "state": "online", 00:13:40.234 "raid_level": "raid5f", 00:13:40.234 "superblock": true, 00:13:40.234 "num_base_bdevs": 3, 00:13:40.234 "num_base_bdevs_discovered": 2, 00:13:40.234 "num_base_bdevs_operational": 2, 00:13:40.234 "base_bdevs_list": [ 00:13:40.234 { 00:13:40.234 "name": null, 00:13:40.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.234 "is_configured": false, 00:13:40.234 "data_offset": 0, 00:13:40.234 "data_size": 63488 00:13:40.234 }, 00:13:40.234 { 00:13:40.234 "name": "pt2", 00:13:40.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.234 "is_configured": true, 00:13:40.234 "data_offset": 2048, 00:13:40.234 "data_size": 63488 00:13:40.234 }, 00:13:40.234 { 00:13:40.234 "name": "pt3", 00:13:40.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.234 "is_configured": true, 00:13:40.234 "data_offset": 2048, 00:13:40.234 "data_size": 63488 00:13:40.234 } 00:13:40.234 ] 00:13:40.234 }' 00:13:40.234 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.493 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.753 [2024-09-30 23:31:20.454969] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.753 [2024-09-30 23:31:20.455003] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.753 [2024-09-30 23:31:20.455061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.753 [2024-09-30 23:31:20.455119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.753 [2024-09-30 23:31:20.455130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.753 [2024-09-30 23:31:20.542814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.753 [2024-09-30 23:31:20.542881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.753 [2024-09-30 23:31:20.542903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:40.753 [2024-09-30 23:31:20.542914] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:40.753 [2024-09-30 23:31:20.545055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.753 [2024-09-30 23:31:20.545095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.753 [2024-09-30 23:31:20.545168] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:40.753 [2024-09-30 23:31:20.545204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.753 pt2 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.753 "name": "raid_bdev1", 00:13:40.753 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:40.753 "strip_size_kb": 64, 00:13:40.753 "state": "configuring", 00:13:40.753 "raid_level": "raid5f", 00:13:40.753 "superblock": true, 00:13:40.753 "num_base_bdevs": 3, 00:13:40.753 "num_base_bdevs_discovered": 1, 00:13:40.753 "num_base_bdevs_operational": 2, 00:13:40.753 "base_bdevs_list": [ 00:13:40.753 { 00:13:40.753 "name": null, 00:13:40.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.753 "is_configured": false, 00:13:40.753 "data_offset": 2048, 00:13:40.753 "data_size": 63488 00:13:40.753 }, 00:13:40.753 { 00:13:40.753 "name": "pt2", 00:13:40.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.753 "is_configured": true, 00:13:40.753 "data_offset": 2048, 00:13:40.753 "data_size": 63488 00:13:40.753 }, 00:13:40.753 { 00:13:40.753 "name": null, 00:13:40.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.753 "is_configured": false, 00:13:40.753 "data_offset": 2048, 00:13:40.753 "data_size": 63488 00:13:40.753 } 00:13:40.753 ] 00:13:40.753 }' 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.753 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.323 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:41.323 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:41.323 23:31:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:41.323 23:31:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:41.323 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.323 23:31:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.323 [2024-09-30 23:31:21.002025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:41.323 [2024-09-30 23:31:21.002078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.323 [2024-09-30 23:31:21.002102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:41.323 [2024-09-30 23:31:21.002114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.323 [2024-09-30 23:31:21.002491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.323 [2024-09-30 23:31:21.002510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:41.323 [2024-09-30 23:31:21.002578] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:41.323 [2024-09-30 23:31:21.002608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:41.323 [2024-09-30 23:31:21.002703] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:13:41.323 [2024-09-30 23:31:21.002712] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:41.323 [2024-09-30 23:31:21.002957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:41.323 [2024-09-30 23:31:21.003452] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:13:41.323 [2024-09-30 23:31:21.003480] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000006d00 00:13:41.323 [2024-09-30 23:31:21.003723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.323 pt3 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.323 23:31:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.323 "name": "raid_bdev1", 00:13:41.323 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:41.323 "strip_size_kb": 64, 00:13:41.323 "state": "online", 00:13:41.323 "raid_level": "raid5f", 00:13:41.323 "superblock": true, 00:13:41.323 "num_base_bdevs": 3, 00:13:41.323 "num_base_bdevs_discovered": 2, 00:13:41.323 "num_base_bdevs_operational": 2, 00:13:41.323 "base_bdevs_list": [ 00:13:41.323 { 00:13:41.323 "name": null, 00:13:41.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.323 "is_configured": false, 00:13:41.323 "data_offset": 2048, 00:13:41.323 "data_size": 63488 00:13:41.323 }, 00:13:41.323 { 00:13:41.323 "name": "pt2", 00:13:41.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.323 "is_configured": true, 00:13:41.323 "data_offset": 2048, 00:13:41.323 "data_size": 63488 00:13:41.323 }, 00:13:41.323 { 00:13:41.323 "name": "pt3", 00:13:41.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.323 "is_configured": true, 00:13:41.323 "data_offset": 2048, 00:13:41.323 "data_size": 63488 00:13:41.323 } 00:13:41.323 ] 00:13:41.323 }' 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.323 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.583 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.583 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.583 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.583 [2024-09-30 23:31:21.409301] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.583 [2024-09-30 23:31:21.409337] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.583 [2024-09-30 23:31:21.409402] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.583 [2024-09-30 23:31:21.409456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.583 [2024-09-30 23:31:21.409469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:13:41.583 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.583 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.583 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.583 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.584 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:41.584 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.844 [2024-09-30 23:31:21.473191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:41.844 [2024-09-30 23:31:21.473248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.844 [2024-09-30 23:31:21.473265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:41.844 [2024-09-30 23:31:21.473278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.844 [2024-09-30 23:31:21.475518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.844 [2024-09-30 23:31:21.475558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:41.844 [2024-09-30 23:31:21.475625] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:41.844 [2024-09-30 23:31:21.475666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:41.844 [2024-09-30 23:31:21.475759] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:41.844 [2024-09-30 23:31:21.475784] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.844 [2024-09-30 23:31:21.475814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:13:41.844 [2024-09-30 23:31:21.475880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:41.844 pt1 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:41.844 23:31:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.844 "name": "raid_bdev1", 00:13:41.844 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:41.844 "strip_size_kb": 64, 00:13:41.844 "state": "configuring", 00:13:41.844 "raid_level": "raid5f", 00:13:41.844 
"superblock": true, 00:13:41.844 "num_base_bdevs": 3, 00:13:41.844 "num_base_bdevs_discovered": 1, 00:13:41.844 "num_base_bdevs_operational": 2, 00:13:41.844 "base_bdevs_list": [ 00:13:41.844 { 00:13:41.844 "name": null, 00:13:41.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.844 "is_configured": false, 00:13:41.844 "data_offset": 2048, 00:13:41.844 "data_size": 63488 00:13:41.844 }, 00:13:41.844 { 00:13:41.844 "name": "pt2", 00:13:41.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.844 "is_configured": true, 00:13:41.844 "data_offset": 2048, 00:13:41.844 "data_size": 63488 00:13:41.844 }, 00:13:41.844 { 00:13:41.844 "name": null, 00:13:41.844 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.844 "is_configured": false, 00:13:41.844 "data_offset": 2048, 00:13:41.844 "data_size": 63488 00:13:41.844 } 00:13:41.844 ] 00:13:41.844 }' 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.844 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.104 [2024-09-30 23:31:21.900495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:42.104 [2024-09-30 23:31:21.900558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.104 [2024-09-30 23:31:21.900579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:42.104 [2024-09-30 23:31:21.900593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.104 [2024-09-30 23:31:21.901031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.104 [2024-09-30 23:31:21.901062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:42.104 [2024-09-30 23:31:21.901137] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:42.104 [2024-09-30 23:31:21.901164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.104 [2024-09-30 23:31:21.901255] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:42.104 [2024-09-30 23:31:21.901269] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:42.104 [2024-09-30 23:31:21.901497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:42.104 [2024-09-30 23:31:21.902005] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:42.104 [2024-09-30 23:31:21.902020] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:42.104 [2024-09-30 23:31:21.902192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.104 pt3 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.104 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.364 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.364 "name": "raid_bdev1", 00:13:42.364 "uuid": "4d30c93b-3198-4631-a7bd-1bba18bbf10f", 00:13:42.364 "strip_size_kb": 64, 00:13:42.364 "state": "online", 00:13:42.364 "raid_level": 
"raid5f", 00:13:42.364 "superblock": true, 00:13:42.364 "num_base_bdevs": 3, 00:13:42.364 "num_base_bdevs_discovered": 2, 00:13:42.364 "num_base_bdevs_operational": 2, 00:13:42.364 "base_bdevs_list": [ 00:13:42.364 { 00:13:42.364 "name": null, 00:13:42.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.364 "is_configured": false, 00:13:42.364 "data_offset": 2048, 00:13:42.364 "data_size": 63488 00:13:42.364 }, 00:13:42.364 { 00:13:42.364 "name": "pt2", 00:13:42.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.364 "is_configured": true, 00:13:42.364 "data_offset": 2048, 00:13:42.364 "data_size": 63488 00:13:42.364 }, 00:13:42.364 { 00:13:42.364 "name": "pt3", 00:13:42.364 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.364 "is_configured": true, 00:13:42.364 "data_offset": 2048, 00:13:42.364 "data_size": 63488 00:13:42.364 } 00:13:42.364 ] 00:13:42.364 }' 00:13:42.364 23:31:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.364 23:31:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.624 [2024-09-30 23:31:22.351997] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4d30c93b-3198-4631-a7bd-1bba18bbf10f '!=' 4d30c93b-3198-4631-a7bd-1bba18bbf10f ']' 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91707 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91707 ']' 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91707 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91707 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.624 killing process with pid 91707 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91707' 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91707 00:13:42.624 [2024-09-30 23:31:22.409598] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.624 [2024-09-30 23:31:22.409684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:42.624 [2024-09-30 23:31:22.409747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.624 [2024-09-30 23:31:22.409766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:42.624 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91707 00:13:42.624 [2024-09-30 23:31:22.443636] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.884 23:31:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:42.884 00:13:42.884 real 0m6.262s 00:13:42.884 user 0m10.386s 00:13:42.884 sys 0m1.370s 00:13:42.884 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.884 23:31:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.884 ************************************ 00:13:42.884 END TEST raid5f_superblock_test 00:13:42.884 ************************************ 00:13:43.143 23:31:22 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:43.143 23:31:22 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:43.143 23:31:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:43.143 23:31:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.143 23:31:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.143 ************************************ 00:13:43.143 START TEST raid5f_rebuild_test 00:13:43.143 ************************************ 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:43.143 23:31:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92134 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92134 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92134 ']' 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.143 23:31:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.143 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:43.143 Zero copy mechanism will not be used. 00:13:43.143 [2024-09-30 23:31:22.873130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:13:43.143 [2024-09-30 23:31:22.873248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92134 ] 00:13:43.402 [2024-09-30 23:31:23.034174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.402 [2024-09-30 23:31:23.083043] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.402 [2024-09-30 23:31:23.127491] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.402 [2024-09-30 23:31:23.127531] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.971 BaseBdev1_malloc 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.971 23:31:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.971 [2024-09-30 23:31:23.702747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:43.971 [2024-09-30 23:31:23.702828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.971 [2024-09-30 23:31:23.702856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:43.971 [2024-09-30 23:31:23.702897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.971 [2024-09-30 23:31:23.704991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.971 [2024-09-30 23:31:23.705029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:43.971 BaseBdev1 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.971 BaseBdev2_malloc 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.971 [2024-09-30 23:31:23.752344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:43.971 [2024-09-30 23:31:23.752449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.971 [2024-09-30 23:31:23.752501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:43.971 [2024-09-30 23:31:23.752530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.971 [2024-09-30 23:31:23.756805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.971 [2024-09-30 23:31:23.756875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:43.971 BaseBdev2 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.971 BaseBdev3_malloc 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.971 [2024-09-30 23:31:23.783210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:43.971 [2024-09-30 23:31:23.783260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.971 [2024-09-30 23:31:23.783284] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:13:43.971 [2024-09-30 23:31:23.783295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.971 [2024-09-30 23:31:23.785361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.971 [2024-09-30 23:31:23.785401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:43.971 BaseBdev3 00:13:43.971 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.972 spare_malloc 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.972 spare_delay 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.972 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.231 [2024-09-30 23:31:23.824089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:44.231 [2024-09-30 23:31:23.824137] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.231 [2024-09-30 23:31:23.824164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:44.231 [2024-09-30 23:31:23.824175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.231 [2024-09-30 23:31:23.826284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.231 [2024-09-30 23:31:23.826334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.231 spare 00:13:44.231 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.231 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:44.231 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.231 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.231 [2024-09-30 23:31:23.840141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.231 [2024-09-30 23:31:23.841927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.231 [2024-09-30 23:31:23.842001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.232 [2024-09-30 23:31:23.842082] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:44.232 [2024-09-30 23:31:23.842102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:44.232 [2024-09-30 23:31:23.842369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:44.232 [2024-09-30 23:31:23.842793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:44.232 [2024-09-30 23:31:23.842820] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:44.232 [2024-09-30 23:31:23.842972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.232 23:31:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.232 "name": "raid_bdev1", 00:13:44.232 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:44.232 "strip_size_kb": 64, 00:13:44.232 "state": "online", 00:13:44.232 "raid_level": "raid5f", 00:13:44.232 "superblock": false, 00:13:44.232 "num_base_bdevs": 3, 00:13:44.232 "num_base_bdevs_discovered": 3, 00:13:44.232 "num_base_bdevs_operational": 3, 00:13:44.232 "base_bdevs_list": [ 00:13:44.232 { 00:13:44.232 "name": "BaseBdev1", 00:13:44.232 "uuid": "f797c86a-fb10-582b-aba2-e25da425f680", 00:13:44.232 "is_configured": true, 00:13:44.232 "data_offset": 0, 00:13:44.232 "data_size": 65536 00:13:44.232 }, 00:13:44.232 { 00:13:44.232 "name": "BaseBdev2", 00:13:44.232 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:44.232 "is_configured": true, 00:13:44.232 "data_offset": 0, 00:13:44.232 "data_size": 65536 00:13:44.232 }, 00:13:44.232 { 00:13:44.232 "name": "BaseBdev3", 00:13:44.232 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:44.232 "is_configured": true, 00:13:44.232 "data_offset": 0, 00:13:44.232 "data_size": 65536 00:13:44.232 } 00:13:44.232 ] 00:13:44.232 }' 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.232 23:31:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 [2024-09-30 23:31:24.271809] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:44.492 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:44.751 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:44.751 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:44.751 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.751 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:44.751 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.751 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:44.751 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.751 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:44.752 [2024-09-30 23:31:24.519404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:44.752 /dev/nbd0 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.752 1+0 records in 00:13:44.752 1+0 records out 00:13:44.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379033 s, 10.8 MB/s 00:13:44.752 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:45.012 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:45.271 512+0 records in 00:13:45.271 512+0 records out 00:13:45.271 67108864 bytes (67 MB, 64 MiB) copied, 0.321111 s, 209 MB/s 00:13:45.271 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:45.271 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.271 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:45.271 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.271 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:45.271 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.271 23:31:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.531 [2024-09-30 23:31:25.143094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.531 [2024-09-30 23:31:25.175121] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.531 "name": "raid_bdev1", 00:13:45.531 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:45.531 "strip_size_kb": 64, 00:13:45.531 "state": "online", 00:13:45.531 "raid_level": "raid5f", 00:13:45.531 "superblock": false, 00:13:45.531 "num_base_bdevs": 3, 00:13:45.531 "num_base_bdevs_discovered": 2, 00:13:45.531 "num_base_bdevs_operational": 2, 00:13:45.531 "base_bdevs_list": [ 00:13:45.531 { 00:13:45.531 "name": null, 00:13:45.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.531 "is_configured": false, 00:13:45.531 "data_offset": 0, 00:13:45.531 "data_size": 65536 00:13:45.531 }, 00:13:45.531 { 00:13:45.531 "name": "BaseBdev2", 00:13:45.531 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:45.531 "is_configured": true, 00:13:45.531 "data_offset": 0, 00:13:45.531 "data_size": 65536 00:13:45.531 }, 00:13:45.531 { 00:13:45.531 "name": "BaseBdev3", 00:13:45.531 "uuid": 
"e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:45.531 "is_configured": true, 00:13:45.531 "data_offset": 0, 00:13:45.531 "data_size": 65536 00:13:45.531 } 00:13:45.531 ] 00:13:45.531 }' 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.531 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.791 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:45.791 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.791 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.791 [2024-09-30 23:31:25.598369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.791 [2024-09-30 23:31:25.604883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:45.791 23:31:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.791 23:31:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:45.791 [2024-09-30 23:31:25.607376] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.171 23:31:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.171 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.171 "name": "raid_bdev1", 00:13:47.171 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:47.171 "strip_size_kb": 64, 00:13:47.171 "state": "online", 00:13:47.171 "raid_level": "raid5f", 00:13:47.171 "superblock": false, 00:13:47.171 "num_base_bdevs": 3, 00:13:47.171 "num_base_bdevs_discovered": 3, 00:13:47.171 "num_base_bdevs_operational": 3, 00:13:47.171 "process": { 00:13:47.171 "type": "rebuild", 00:13:47.171 "target": "spare", 00:13:47.171 "progress": { 00:13:47.171 "blocks": 20480, 00:13:47.171 "percent": 15 00:13:47.171 } 00:13:47.171 }, 00:13:47.171 "base_bdevs_list": [ 00:13:47.171 { 00:13:47.171 "name": "spare", 00:13:47.171 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:47.171 "is_configured": true, 00:13:47.171 "data_offset": 0, 00:13:47.171 "data_size": 65536 00:13:47.171 }, 00:13:47.171 { 00:13:47.171 "name": "BaseBdev2", 00:13:47.171 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:47.171 "is_configured": true, 00:13:47.171 "data_offset": 0, 00:13:47.171 "data_size": 65536 00:13:47.171 }, 00:13:47.171 { 00:13:47.171 "name": "BaseBdev3", 00:13:47.171 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:47.171 "is_configured": true, 00:13:47.171 "data_offset": 0, 00:13:47.171 "data_size": 65536 00:13:47.171 } 00:13:47.171 ] 00:13:47.171 }' 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.172 [2024-09-30 23:31:26.771655] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.172 [2024-09-30 23:31:26.815857] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.172 [2024-09-30 23:31:26.815960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.172 [2024-09-30 23:31:26.815980] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.172 [2024-09-30 23:31:26.815992] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.172 "name": "raid_bdev1", 00:13:47.172 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:47.172 "strip_size_kb": 64, 00:13:47.172 "state": "online", 00:13:47.172 "raid_level": "raid5f", 00:13:47.172 "superblock": false, 00:13:47.172 "num_base_bdevs": 3, 00:13:47.172 "num_base_bdevs_discovered": 2, 00:13:47.172 "num_base_bdevs_operational": 2, 00:13:47.172 "base_bdevs_list": [ 00:13:47.172 { 00:13:47.172 "name": null, 00:13:47.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.172 "is_configured": false, 00:13:47.172 "data_offset": 0, 00:13:47.172 "data_size": 65536 00:13:47.172 }, 00:13:47.172 { 00:13:47.172 "name": "BaseBdev2", 00:13:47.172 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:47.172 "is_configured": true, 00:13:47.172 "data_offset": 0, 00:13:47.172 "data_size": 65536 00:13:47.172 }, 00:13:47.172 { 00:13:47.172 "name": "BaseBdev3", 00:13:47.172 "uuid": 
"e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:47.172 "is_configured": true, 00:13:47.172 "data_offset": 0, 00:13:47.172 "data_size": 65536 00:13:47.172 } 00:13:47.172 ] 00:13:47.172 }' 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.172 23:31:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.431 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.690 23:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.690 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.690 "name": "raid_bdev1", 00:13:47.690 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:47.690 "strip_size_kb": 64, 00:13:47.690 "state": "online", 00:13:47.690 "raid_level": "raid5f", 00:13:47.690 "superblock": false, 00:13:47.690 "num_base_bdevs": 3, 00:13:47.690 "num_base_bdevs_discovered": 2, 00:13:47.690 "num_base_bdevs_operational": 2, 00:13:47.690 "base_bdevs_list": [ 00:13:47.690 { 00:13:47.690 
"name": null, 00:13:47.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.690 "is_configured": false, 00:13:47.690 "data_offset": 0, 00:13:47.690 "data_size": 65536 00:13:47.690 }, 00:13:47.690 { 00:13:47.690 "name": "BaseBdev2", 00:13:47.690 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:47.690 "is_configured": true, 00:13:47.690 "data_offset": 0, 00:13:47.690 "data_size": 65536 00:13:47.690 }, 00:13:47.690 { 00:13:47.690 "name": "BaseBdev3", 00:13:47.690 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:47.690 "is_configured": true, 00:13:47.690 "data_offset": 0, 00:13:47.690 "data_size": 65536 00:13:47.690 } 00:13:47.690 ] 00:13:47.690 }' 00:13:47.690 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.690 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.690 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.690 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.690 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:47.690 23:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.691 23:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.691 [2024-09-30 23:31:27.391830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:47.691 [2024-09-30 23:31:27.396431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:47.691 23:31:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.691 23:31:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:47.691 [2024-09-30 23:31:27.398831] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.627 23:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.628 23:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.628 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.628 "name": "raid_bdev1", 00:13:48.628 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:48.628 "strip_size_kb": 64, 00:13:48.628 "state": "online", 00:13:48.628 "raid_level": "raid5f", 00:13:48.628 "superblock": false, 00:13:48.628 "num_base_bdevs": 3, 00:13:48.628 "num_base_bdevs_discovered": 3, 00:13:48.628 "num_base_bdevs_operational": 3, 00:13:48.628 "process": { 00:13:48.628 "type": "rebuild", 00:13:48.628 "target": "spare", 00:13:48.628 "progress": { 00:13:48.628 "blocks": 20480, 00:13:48.628 "percent": 15 00:13:48.628 } 00:13:48.628 }, 00:13:48.628 "base_bdevs_list": [ 00:13:48.628 { 00:13:48.628 "name": "spare", 00:13:48.628 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:48.628 "is_configured": true, 00:13:48.628 "data_offset": 0, 
00:13:48.628 "data_size": 65536 00:13:48.628 }, 00:13:48.628 { 00:13:48.628 "name": "BaseBdev2", 00:13:48.628 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:48.628 "is_configured": true, 00:13:48.628 "data_offset": 0, 00:13:48.628 "data_size": 65536 00:13:48.628 }, 00:13:48.628 { 00:13:48.628 "name": "BaseBdev3", 00:13:48.628 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:48.628 "is_configured": true, 00:13:48.628 "data_offset": 0, 00:13:48.628 "data_size": 65536 00:13:48.628 } 00:13:48.628 ] 00:13:48.628 }' 00:13:48.628 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.887 23:31:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.887 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.887 "name": "raid_bdev1", 00:13:48.887 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:48.887 "strip_size_kb": 64, 00:13:48.887 "state": "online", 00:13:48.887 "raid_level": "raid5f", 00:13:48.887 "superblock": false, 00:13:48.887 "num_base_bdevs": 3, 00:13:48.887 "num_base_bdevs_discovered": 3, 00:13:48.887 "num_base_bdevs_operational": 3, 00:13:48.887 "process": { 00:13:48.887 "type": "rebuild", 00:13:48.887 "target": "spare", 00:13:48.887 "progress": { 00:13:48.887 "blocks": 22528, 00:13:48.887 "percent": 17 00:13:48.887 } 00:13:48.887 }, 00:13:48.887 "base_bdevs_list": [ 00:13:48.887 { 00:13:48.888 "name": "spare", 00:13:48.888 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:48.888 "is_configured": true, 00:13:48.888 "data_offset": 0, 00:13:48.888 "data_size": 65536 00:13:48.888 }, 00:13:48.888 { 00:13:48.888 "name": "BaseBdev2", 00:13:48.888 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:48.888 "is_configured": true, 00:13:48.888 "data_offset": 0, 00:13:48.888 "data_size": 65536 00:13:48.888 }, 00:13:48.888 { 00:13:48.888 "name": "BaseBdev3", 00:13:48.888 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:48.888 "is_configured": true, 00:13:48.888 "data_offset": 0, 00:13:48.888 "data_size": 65536 00:13:48.888 } 
00:13:48.888 ] 00:13:48.888 }' 00:13:48.888 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.888 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:48.888 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.888 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:48.888 23:31:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.824 23:31:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.083 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.083 "name": "raid_bdev1", 00:13:50.083 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:50.083 
"strip_size_kb": 64, 00:13:50.083 "state": "online", 00:13:50.083 "raid_level": "raid5f", 00:13:50.083 "superblock": false, 00:13:50.083 "num_base_bdevs": 3, 00:13:50.083 "num_base_bdevs_discovered": 3, 00:13:50.083 "num_base_bdevs_operational": 3, 00:13:50.083 "process": { 00:13:50.083 "type": "rebuild", 00:13:50.083 "target": "spare", 00:13:50.083 "progress": { 00:13:50.083 "blocks": 45056, 00:13:50.083 "percent": 34 00:13:50.083 } 00:13:50.083 }, 00:13:50.083 "base_bdevs_list": [ 00:13:50.083 { 00:13:50.083 "name": "spare", 00:13:50.083 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:50.083 "is_configured": true, 00:13:50.083 "data_offset": 0, 00:13:50.083 "data_size": 65536 00:13:50.083 }, 00:13:50.083 { 00:13:50.083 "name": "BaseBdev2", 00:13:50.083 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:50.083 "is_configured": true, 00:13:50.083 "data_offset": 0, 00:13:50.083 "data_size": 65536 00:13:50.083 }, 00:13:50.083 { 00:13:50.083 "name": "BaseBdev3", 00:13:50.083 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:50.083 "is_configured": true, 00:13:50.083 "data_offset": 0, 00:13:50.083 "data_size": 65536 00:13:50.083 } 00:13:50.083 ] 00:13:50.083 }' 00:13:50.083 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.083 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.083 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.083 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.083 23:31:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.018 23:31:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.018 "name": "raid_bdev1", 00:13:51.018 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:51.018 "strip_size_kb": 64, 00:13:51.018 "state": "online", 00:13:51.018 "raid_level": "raid5f", 00:13:51.018 "superblock": false, 00:13:51.018 "num_base_bdevs": 3, 00:13:51.018 "num_base_bdevs_discovered": 3, 00:13:51.018 "num_base_bdevs_operational": 3, 00:13:51.018 "process": { 00:13:51.018 "type": "rebuild", 00:13:51.018 "target": "spare", 00:13:51.018 "progress": { 00:13:51.018 "blocks": 67584, 00:13:51.018 "percent": 51 00:13:51.018 } 00:13:51.018 }, 00:13:51.018 "base_bdevs_list": [ 00:13:51.018 { 00:13:51.018 "name": "spare", 00:13:51.018 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:51.018 "is_configured": true, 00:13:51.018 "data_offset": 0, 00:13:51.018 "data_size": 65536 00:13:51.018 }, 00:13:51.018 { 00:13:51.018 "name": "BaseBdev2", 00:13:51.018 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:51.018 
"is_configured": true, 00:13:51.018 "data_offset": 0, 00:13:51.018 "data_size": 65536 00:13:51.018 }, 00:13:51.018 { 00:13:51.018 "name": "BaseBdev3", 00:13:51.018 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:51.018 "is_configured": true, 00:13:51.018 "data_offset": 0, 00:13:51.018 "data_size": 65536 00:13:51.018 } 00:13:51.018 ] 00:13:51.018 }' 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.018 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.277 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.277 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.277 23:31:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.213 "name": "raid_bdev1", 00:13:52.213 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:52.213 "strip_size_kb": 64, 00:13:52.213 "state": "online", 00:13:52.213 "raid_level": "raid5f", 00:13:52.213 "superblock": false, 00:13:52.213 "num_base_bdevs": 3, 00:13:52.213 "num_base_bdevs_discovered": 3, 00:13:52.213 "num_base_bdevs_operational": 3, 00:13:52.213 "process": { 00:13:52.213 "type": "rebuild", 00:13:52.213 "target": "spare", 00:13:52.213 "progress": { 00:13:52.213 "blocks": 92160, 00:13:52.213 "percent": 70 00:13:52.213 } 00:13:52.213 }, 00:13:52.213 "base_bdevs_list": [ 00:13:52.213 { 00:13:52.213 "name": "spare", 00:13:52.213 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:52.213 "is_configured": true, 00:13:52.213 "data_offset": 0, 00:13:52.213 "data_size": 65536 00:13:52.213 }, 00:13:52.213 { 00:13:52.213 "name": "BaseBdev2", 00:13:52.213 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:52.213 "is_configured": true, 00:13:52.213 "data_offset": 0, 00:13:52.213 "data_size": 65536 00:13:52.213 }, 00:13:52.213 { 00:13:52.213 "name": "BaseBdev3", 00:13:52.213 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:52.213 "is_configured": true, 00:13:52.213 "data_offset": 0, 00:13:52.213 "data_size": 65536 00:13:52.213 } 00:13:52.213 ] 00:13:52.213 }' 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.213 23:31:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.213 23:31:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.213 23:31:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.609 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.609 "name": "raid_bdev1", 00:13:53.609 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:53.609 "strip_size_kb": 64, 00:13:53.609 "state": "online", 00:13:53.609 "raid_level": "raid5f", 00:13:53.609 "superblock": false, 00:13:53.610 "num_base_bdevs": 3, 00:13:53.610 "num_base_bdevs_discovered": 3, 00:13:53.610 "num_base_bdevs_operational": 3, 00:13:53.610 "process": { 00:13:53.610 "type": "rebuild", 00:13:53.610 "target": "spare", 00:13:53.610 "progress": { 00:13:53.610 "blocks": 114688, 00:13:53.610 "percent": 87 00:13:53.610 } 00:13:53.610 }, 00:13:53.610 "base_bdevs_list": [ 00:13:53.610 { 
00:13:53.610 "name": "spare", 00:13:53.610 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:53.610 "is_configured": true, 00:13:53.610 "data_offset": 0, 00:13:53.610 "data_size": 65536 00:13:53.610 }, 00:13:53.610 { 00:13:53.610 "name": "BaseBdev2", 00:13:53.610 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:53.610 "is_configured": true, 00:13:53.610 "data_offset": 0, 00:13:53.610 "data_size": 65536 00:13:53.610 }, 00:13:53.610 { 00:13:53.610 "name": "BaseBdev3", 00:13:53.610 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:53.610 "is_configured": true, 00:13:53.610 "data_offset": 0, 00:13:53.610 "data_size": 65536 00:13:53.610 } 00:13:53.610 ] 00:13:53.610 }' 00:13:53.610 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.610 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.610 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.610 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.610 23:31:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.179 [2024-09-30 23:31:33.839283] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:54.179 [2024-09-30 23:31:33.839360] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:54.179 [2024-09-30 23:31:33.839410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.438 23:31:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.438 "name": "raid_bdev1", 00:13:54.438 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:54.438 "strip_size_kb": 64, 00:13:54.438 "state": "online", 00:13:54.438 "raid_level": "raid5f", 00:13:54.438 "superblock": false, 00:13:54.438 "num_base_bdevs": 3, 00:13:54.438 "num_base_bdevs_discovered": 3, 00:13:54.438 "num_base_bdevs_operational": 3, 00:13:54.438 "base_bdevs_list": [ 00:13:54.438 { 00:13:54.438 "name": "spare", 00:13:54.438 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:54.438 "is_configured": true, 00:13:54.438 "data_offset": 0, 00:13:54.438 "data_size": 65536 00:13:54.438 }, 00:13:54.438 { 00:13:54.438 "name": "BaseBdev2", 00:13:54.438 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:54.438 "is_configured": true, 00:13:54.438 "data_offset": 0, 00:13:54.438 "data_size": 65536 00:13:54.438 }, 00:13:54.438 { 00:13:54.438 "name": "BaseBdev3", 00:13:54.438 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:54.438 "is_configured": true, 00:13:54.438 "data_offset": 0, 00:13:54.438 "data_size": 65536 00:13:54.438 } 
00:13:54.438 ] 00:13:54.438 }' 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:54.438 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.698 "name": "raid_bdev1", 00:13:54.698 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:54.698 "strip_size_kb": 64, 00:13:54.698 "state": "online", 00:13:54.698 "raid_level": "raid5f", 00:13:54.698 "superblock": false, 
00:13:54.698 "num_base_bdevs": 3, 00:13:54.698 "num_base_bdevs_discovered": 3, 00:13:54.698 "num_base_bdevs_operational": 3, 00:13:54.698 "base_bdevs_list": [ 00:13:54.698 { 00:13:54.698 "name": "spare", 00:13:54.698 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:54.698 "is_configured": true, 00:13:54.698 "data_offset": 0, 00:13:54.698 "data_size": 65536 00:13:54.698 }, 00:13:54.698 { 00:13:54.698 "name": "BaseBdev2", 00:13:54.698 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:54.698 "is_configured": true, 00:13:54.698 "data_offset": 0, 00:13:54.698 "data_size": 65536 00:13:54.698 }, 00:13:54.698 { 00:13:54.698 "name": "BaseBdev3", 00:13:54.698 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 00:13:54.698 "is_configured": true, 00:13:54.698 "data_offset": 0, 00:13:54.698 "data_size": 65536 00:13:54.698 } 00:13:54.698 ] 00:13:54.698 }' 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.698 
23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.698 "name": "raid_bdev1", 00:13:54.698 "uuid": "5f42ff28-a580-4378-a39b-8b05e41f8233", 00:13:54.698 "strip_size_kb": 64, 00:13:54.698 "state": "online", 00:13:54.698 "raid_level": "raid5f", 00:13:54.698 "superblock": false, 00:13:54.698 "num_base_bdevs": 3, 00:13:54.698 "num_base_bdevs_discovered": 3, 00:13:54.698 "num_base_bdevs_operational": 3, 00:13:54.698 "base_bdevs_list": [ 00:13:54.698 { 00:13:54.698 "name": "spare", 00:13:54.698 "uuid": "7af740a5-548b-5c09-9f6e-f4495d091608", 00:13:54.698 "is_configured": true, 00:13:54.698 "data_offset": 0, 00:13:54.698 "data_size": 65536 00:13:54.698 }, 00:13:54.698 { 00:13:54.698 "name": "BaseBdev2", 00:13:54.698 "uuid": "551a30a4-212e-530f-bd8e-e436ba6fcf51", 00:13:54.698 "is_configured": true, 00:13:54.698 "data_offset": 0, 00:13:54.698 "data_size": 65536 00:13:54.698 }, 00:13:54.698 { 00:13:54.698 "name": "BaseBdev3", 00:13:54.698 "uuid": "e2008b6d-b57c-53f8-b975-0cc6368bb8df", 
00:13:54.698 "is_configured": true, 00:13:54.698 "data_offset": 0, 00:13:54.698 "data_size": 65536 00:13:54.698 } 00:13:54.698 ] 00:13:54.698 }' 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.698 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.267 [2024-09-30 23:31:34.862188] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.267 [2024-09-30 23:31:34.862236] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.267 [2024-09-30 23:31:34.862321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.267 [2024-09-30 23:31:34.862396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.267 [2024-09-30 23:31:34.862405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:55.267 23:31:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:55.267 /dev/nbd0 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.526 1+0 records in 00:13:55.526 1+0 records out 00:13:55.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415825 s, 9.9 MB/s 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:55.526 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:55.527 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.527 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:55.527 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:55.527 /dev/nbd1 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:55.786 23:31:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.786 1+0 records in 00:13:55.786 1+0 records out 00:13:55.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616199 s, 6.6 MB/s 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.786 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.046 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92134 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92134 ']' 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92134 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92134 00:13:56.306 killing process with pid 92134 00:13:56.306 Received shutdown signal, test time was about 60.000000 seconds 00:13:56.306 00:13:56.306 Latency(us) 00:13:56.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.306 =================================================================================================================== 00:13:56.306 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92134' 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92134 00:13:56.306 [2024-09-30 23:31:35.985659] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.306 23:31:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92134 00:13:56.306 [2024-09-30 23:31:36.059284] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.567 23:31:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:56.567 00:13:56.567 real 0m13.643s 00:13:56.567 user 0m16.919s 00:13:56.567 sys 0m1.961s 00:13:56.567 23:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.827 ************************************ 00:13:56.827 END TEST raid5f_rebuild_test 00:13:56.827 ************************************ 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.827 23:31:36 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:56.827 23:31:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:56.827 23:31:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.827 23:31:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.827 ************************************ 00:13:56.827 START TEST raid5f_rebuild_test_sb 00:13:56.827 ************************************ 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # 
local raid_level=raid5f 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # local strip_size 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92559 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92559 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92559 ']' 00:13:56.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.827 23:31:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.827 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:56.827 Zero copy mechanism will not be used. 00:13:56.827 [2024-09-30 23:31:36.596643] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:13:56.827 [2024-09-30 23:31:36.596766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92559 ] 00:13:57.087 [2024-09-30 23:31:36.756114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.087 [2024-09-30 23:31:36.826460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.087 [2024-09-30 23:31:36.901527] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.087 [2024-09-30 23:31:36.901663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.657 23:31:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.657 BaseBdev1_malloc 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.657 [2024-09-30 23:31:37.447425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:57.657 [2024-09-30 23:31:37.447540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.657 [2024-09-30 23:31:37.447587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:57.657 [2024-09-30 23:31:37.447635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.657 [2024-09-30 23:31:37.449920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.657 [2024-09-30 23:31:37.449985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:57.657 BaseBdev1 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.657 BaseBdev2_malloc 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.657 [2024-09-30 23:31:37.497027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:57.657 [2024-09-30 23:31:37.497209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.657 [2024-09-30 23:31:37.497298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:57.657 [2024-09-30 23:31:37.497376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.657 [2024-09-30 23:31:37.502127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.657 [2024-09-30 23:31:37.502264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:57.657 BaseBdev2 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.657 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.918 BaseBdev3_malloc 
00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.918 [2024-09-30 23:31:37.535249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:57.918 [2024-09-30 23:31:37.535299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.918 [2024-09-30 23:31:37.535336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:57.918 [2024-09-30 23:31:37.535345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.918 [2024-09-30 23:31:37.537601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.918 [2024-09-30 23:31:37.537638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:57.918 BaseBdev3 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.918 spare_malloc 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:57.918 23:31:37 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.918 spare_delay 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.918 [2024-09-30 23:31:37.581658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:57.918 [2024-09-30 23:31:37.581708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.918 [2024-09-30 23:31:37.581733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:57.918 [2024-09-30 23:31:37.581742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.918 [2024-09-30 23:31:37.584082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.918 [2024-09-30 23:31:37.584119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:57.918 spare 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.918 [2024-09-30 23:31:37.593715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:57.918 [2024-09-30 23:31:37.595809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.918 [2024-09-30 23:31:37.595938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.918 [2024-09-30 23:31:37.596102] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:57.918 [2024-09-30 23:31:37.596116] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.918 [2024-09-30 23:31:37.596360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:57.918 [2024-09-30 23:31:37.596787] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:57.918 [2024-09-30 23:31:37.596797] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:57.918 [2024-09-30 23:31:37.596942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.918 23:31:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.918 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.918 "name": "raid_bdev1", 00:13:57.918 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:13:57.918 "strip_size_kb": 64, 00:13:57.918 "state": "online", 00:13:57.918 "raid_level": "raid5f", 00:13:57.918 "superblock": true, 00:13:57.918 "num_base_bdevs": 3, 00:13:57.918 "num_base_bdevs_discovered": 3, 00:13:57.918 "num_base_bdevs_operational": 3, 00:13:57.918 "base_bdevs_list": [ 00:13:57.918 { 00:13:57.918 "name": "BaseBdev1", 00:13:57.918 "uuid": "a3eaa147-a3dc-56a8-bef1-4ce1ea1b1a19", 00:13:57.918 "is_configured": true, 00:13:57.918 "data_offset": 2048, 00:13:57.918 "data_size": 63488 00:13:57.918 }, 00:13:57.918 { 00:13:57.918 "name": "BaseBdev2", 00:13:57.918 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:13:57.918 "is_configured": true, 00:13:57.918 "data_offset": 2048, 00:13:57.918 "data_size": 63488 00:13:57.918 }, 00:13:57.918 { 00:13:57.918 "name": "BaseBdev3", 00:13:57.918 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:13:57.918 "is_configured": true, 00:13:57.918 "data_offset": 2048, 00:13:57.918 
"data_size": 63488 00:13:57.918 } 00:13:57.918 ] 00:13:57.919 }' 00:13:57.919 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.919 23:31:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.178 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.178 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:58.178 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.178 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.178 [2024-09-30 23:31:38.030556] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local 
write_unit_size 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:58.438 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:58.698 [2024-09-30 23:31:38.305978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:58.698 /dev/nbd0 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:58.698 23:31:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.698 1+0 records in 00:13:58.698 1+0 records out 00:13:58.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580027 s, 7.1 MB/s 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:58.698 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 
00:13:58.958 496+0 records in 00:13:58.958 496+0 records out 00:13:58.958 65011712 bytes (65 MB, 62 MiB) copied, 0.305194 s, 213 MB/s 00:13:58.958 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:58.958 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.958 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:58.958 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:58.958 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:58.958 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.958 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.217 [2024-09-30 23:31:38.899790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.217 [2024-09-30 23:31:38.911875] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.217 "name": "raid_bdev1", 00:13:59.217 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:13:59.217 "strip_size_kb": 64, 00:13:59.217 "state": "online", 00:13:59.217 "raid_level": "raid5f", 00:13:59.217 "superblock": true, 00:13:59.217 "num_base_bdevs": 3, 00:13:59.217 "num_base_bdevs_discovered": 2, 00:13:59.217 "num_base_bdevs_operational": 2, 00:13:59.217 "base_bdevs_list": [ 00:13:59.217 { 00:13:59.217 "name": null, 00:13:59.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.217 "is_configured": false, 00:13:59.217 "data_offset": 0, 00:13:59.217 "data_size": 63488 00:13:59.217 }, 00:13:59.217 { 00:13:59.217 "name": "BaseBdev2", 00:13:59.217 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:13:59.217 "is_configured": true, 00:13:59.217 "data_offset": 2048, 00:13:59.217 "data_size": 63488 00:13:59.217 }, 00:13:59.217 { 00:13:59.217 "name": "BaseBdev3", 00:13:59.217 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:13:59.217 "is_configured": true, 00:13:59.217 "data_offset": 2048, 00:13:59.217 "data_size": 63488 00:13:59.217 } 00:13:59.217 ] 00:13:59.217 }' 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.217 23:31:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.787 23:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.787 23:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.787 23:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.787 [2024-09-30 23:31:39.391074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:59.787 [2024-09-30 23:31:39.397654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:13:59.787 23:31:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.787 23:31:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:59.787 [2024-09-30 23:31:39.399865] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.725 "name": "raid_bdev1", 00:14:00.725 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:00.725 "strip_size_kb": 64, 00:14:00.725 "state": "online", 00:14:00.725 "raid_level": "raid5f", 00:14:00.725 "superblock": true, 00:14:00.725 "num_base_bdevs": 3, 00:14:00.725 
"num_base_bdevs_discovered": 3, 00:14:00.725 "num_base_bdevs_operational": 3, 00:14:00.725 "process": { 00:14:00.725 "type": "rebuild", 00:14:00.725 "target": "spare", 00:14:00.725 "progress": { 00:14:00.725 "blocks": 20480, 00:14:00.725 "percent": 16 00:14:00.725 } 00:14:00.725 }, 00:14:00.725 "base_bdevs_list": [ 00:14:00.725 { 00:14:00.725 "name": "spare", 00:14:00.725 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:00.725 "is_configured": true, 00:14:00.725 "data_offset": 2048, 00:14:00.725 "data_size": 63488 00:14:00.725 }, 00:14:00.725 { 00:14:00.725 "name": "BaseBdev2", 00:14:00.725 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:00.725 "is_configured": true, 00:14:00.725 "data_offset": 2048, 00:14:00.725 "data_size": 63488 00:14:00.725 }, 00:14:00.725 { 00:14:00.725 "name": "BaseBdev3", 00:14:00.725 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:00.725 "is_configured": true, 00:14:00.725 "data_offset": 2048, 00:14:00.725 "data_size": 63488 00:14:00.725 } 00:14:00.725 ] 00:14:00.725 }' 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.725 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.725 [2024-09-30 23:31:40.547657] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.985 [2024-09-30 23:31:40.608355] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:00.985 [2024-09-30 23:31:40.608478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.985 [2024-09-30 23:31:40.608520] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.985 [2024-09-30 23:31:40.608557] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.985 23:31:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.985 "name": "raid_bdev1", 00:14:00.985 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:00.985 "strip_size_kb": 64, 00:14:00.985 "state": "online", 00:14:00.985 "raid_level": "raid5f", 00:14:00.985 "superblock": true, 00:14:00.985 "num_base_bdevs": 3, 00:14:00.985 "num_base_bdevs_discovered": 2, 00:14:00.985 "num_base_bdevs_operational": 2, 00:14:00.985 "base_bdevs_list": [ 00:14:00.985 { 00:14:00.985 "name": null, 00:14:00.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.985 "is_configured": false, 00:14:00.985 "data_offset": 0, 00:14:00.985 "data_size": 63488 00:14:00.985 }, 00:14:00.985 { 00:14:00.985 "name": "BaseBdev2", 00:14:00.985 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:00.985 "is_configured": true, 00:14:00.985 "data_offset": 2048, 00:14:00.985 "data_size": 63488 00:14:00.985 }, 00:14:00.985 { 00:14:00.985 "name": "BaseBdev3", 00:14:00.985 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:00.985 "is_configured": true, 00:14:00.985 "data_offset": 2048, 00:14:00.985 "data_size": 63488 00:14:00.985 } 00:14:00.985 ] 00:14:00.985 }' 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.985 23:31:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.245 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.504 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.504 "name": "raid_bdev1", 00:14:01.504 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:01.504 "strip_size_kb": 64, 00:14:01.504 "state": "online", 00:14:01.504 "raid_level": "raid5f", 00:14:01.504 "superblock": true, 00:14:01.504 "num_base_bdevs": 3, 00:14:01.504 "num_base_bdevs_discovered": 2, 00:14:01.504 "num_base_bdevs_operational": 2, 00:14:01.504 "base_bdevs_list": [ 00:14:01.504 { 00:14:01.504 "name": null, 00:14:01.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.504 "is_configured": false, 00:14:01.504 "data_offset": 0, 00:14:01.504 "data_size": 63488 00:14:01.504 }, 00:14:01.504 { 00:14:01.504 "name": "BaseBdev2", 00:14:01.504 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:01.504 "is_configured": true, 00:14:01.504 "data_offset": 2048, 00:14:01.504 "data_size": 63488 00:14:01.504 }, 00:14:01.504 { 00:14:01.504 "name": "BaseBdev3", 00:14:01.504 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:01.504 "is_configured": true, 00:14:01.505 "data_offset": 2048, 00:14:01.505 "data_size": 63488 00:14:01.505 } 00:14:01.505 
] 00:14:01.505 }' 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.505 [2024-09-30 23:31:41.192452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.505 [2024-09-30 23:31:41.196644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.505 23:31:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:01.505 [2024-09-30 23:31:41.198687] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.443 23:31:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.443 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.443 "name": "raid_bdev1", 00:14:02.443 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:02.443 "strip_size_kb": 64, 00:14:02.443 "state": "online", 00:14:02.443 "raid_level": "raid5f", 00:14:02.443 "superblock": true, 00:14:02.443 "num_base_bdevs": 3, 00:14:02.443 "num_base_bdevs_discovered": 3, 00:14:02.443 "num_base_bdevs_operational": 3, 00:14:02.443 "process": { 00:14:02.443 "type": "rebuild", 00:14:02.443 "target": "spare", 00:14:02.443 "progress": { 00:14:02.443 "blocks": 20480, 00:14:02.443 "percent": 16 00:14:02.443 } 00:14:02.443 }, 00:14:02.443 "base_bdevs_list": [ 00:14:02.443 { 00:14:02.443 "name": "spare", 00:14:02.443 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:02.443 "is_configured": true, 00:14:02.443 "data_offset": 2048, 00:14:02.443 "data_size": 63488 00:14:02.443 }, 00:14:02.443 { 00:14:02.443 "name": "BaseBdev2", 00:14:02.443 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:02.443 "is_configured": true, 00:14:02.443 "data_offset": 2048, 00:14:02.443 "data_size": 63488 00:14:02.443 }, 00:14:02.443 { 00:14:02.443 "name": "BaseBdev3", 00:14:02.443 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:02.443 "is_configured": true, 00:14:02.443 "data_offset": 2048, 00:14:02.443 "data_size": 63488 00:14:02.443 } 00:14:02.443 ] 00:14:02.443 }' 00:14:02.443 23:31:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:02.703 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=463 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.703 "name": "raid_bdev1", 00:14:02.703 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:02.703 "strip_size_kb": 64, 00:14:02.703 "state": "online", 00:14:02.703 "raid_level": "raid5f", 00:14:02.703 "superblock": true, 00:14:02.703 "num_base_bdevs": 3, 00:14:02.703 "num_base_bdevs_discovered": 3, 00:14:02.703 "num_base_bdevs_operational": 3, 00:14:02.703 "process": { 00:14:02.703 "type": "rebuild", 00:14:02.703 "target": "spare", 00:14:02.703 "progress": { 00:14:02.703 "blocks": 22528, 00:14:02.703 "percent": 17 00:14:02.703 } 00:14:02.703 }, 00:14:02.703 "base_bdevs_list": [ 00:14:02.703 { 00:14:02.703 "name": "spare", 00:14:02.703 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:02.703 "is_configured": true, 00:14:02.703 "data_offset": 2048, 00:14:02.703 "data_size": 63488 00:14:02.703 }, 00:14:02.703 { 00:14:02.703 "name": "BaseBdev2", 00:14:02.703 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:02.703 "is_configured": true, 00:14:02.703 "data_offset": 2048, 00:14:02.703 "data_size": 63488 00:14:02.703 }, 00:14:02.703 { 00:14:02.703 "name": "BaseBdev3", 00:14:02.703 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:02.703 "is_configured": true, 00:14:02.703 "data_offset": 2048, 00:14:02.703 "data_size": 63488 00:14:02.703 } 00:14:02.703 ] 00:14:02.703 }' 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.703 23:31:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.703 23:31:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.083 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.083 "name": "raid_bdev1", 00:14:04.083 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:04.083 "strip_size_kb": 64, 00:14:04.083 "state": "online", 00:14:04.083 "raid_level": "raid5f", 00:14:04.083 "superblock": true, 00:14:04.083 "num_base_bdevs": 3, 00:14:04.083 "num_base_bdevs_discovered": 3, 00:14:04.083 "num_base_bdevs_operational": 
3, 00:14:04.083 "process": { 00:14:04.083 "type": "rebuild", 00:14:04.083 "target": "spare", 00:14:04.083 "progress": { 00:14:04.083 "blocks": 45056, 00:14:04.083 "percent": 35 00:14:04.083 } 00:14:04.083 }, 00:14:04.084 "base_bdevs_list": [ 00:14:04.084 { 00:14:04.084 "name": "spare", 00:14:04.084 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:04.084 "is_configured": true, 00:14:04.084 "data_offset": 2048, 00:14:04.084 "data_size": 63488 00:14:04.084 }, 00:14:04.084 { 00:14:04.084 "name": "BaseBdev2", 00:14:04.084 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:04.084 "is_configured": true, 00:14:04.084 "data_offset": 2048, 00:14:04.084 "data_size": 63488 00:14:04.084 }, 00:14:04.084 { 00:14:04.084 "name": "BaseBdev3", 00:14:04.084 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:04.084 "is_configured": true, 00:14:04.084 "data_offset": 2048, 00:14:04.084 "data_size": 63488 00:14:04.084 } 00:14:04.084 ] 00:14:04.084 }' 00:14:04.084 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.084 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.084 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.084 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.084 23:31:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.022 
23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.022 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.022 "name": "raid_bdev1", 00:14:05.022 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:05.022 "strip_size_kb": 64, 00:14:05.022 "state": "online", 00:14:05.022 "raid_level": "raid5f", 00:14:05.022 "superblock": true, 00:14:05.022 "num_base_bdevs": 3, 00:14:05.022 "num_base_bdevs_discovered": 3, 00:14:05.022 "num_base_bdevs_operational": 3, 00:14:05.022 "process": { 00:14:05.022 "type": "rebuild", 00:14:05.022 "target": "spare", 00:14:05.022 "progress": { 00:14:05.022 "blocks": 69632, 00:14:05.022 "percent": 54 00:14:05.022 } 00:14:05.022 }, 00:14:05.022 "base_bdevs_list": [ 00:14:05.022 { 00:14:05.022 "name": "spare", 00:14:05.022 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:05.022 "is_configured": true, 00:14:05.022 "data_offset": 2048, 00:14:05.022 "data_size": 63488 00:14:05.022 }, 00:14:05.022 { 00:14:05.022 "name": "BaseBdev2", 00:14:05.022 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:05.022 "is_configured": true, 00:14:05.022 "data_offset": 2048, 00:14:05.022 "data_size": 63488 00:14:05.022 }, 00:14:05.022 { 00:14:05.022 "name": "BaseBdev3", 00:14:05.022 "uuid": 
"150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:05.022 "is_configured": true, 00:14:05.022 "data_offset": 2048, 00:14:05.022 "data_size": 63488 00:14:05.022 } 00:14:05.023 ] 00:14:05.023 }' 00:14:05.023 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.023 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.023 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.023 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.023 23:31:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.403 
23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.403 "name": "raid_bdev1", 00:14:06.403 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:06.403 "strip_size_kb": 64, 00:14:06.403 "state": "online", 00:14:06.403 "raid_level": "raid5f", 00:14:06.403 "superblock": true, 00:14:06.403 "num_base_bdevs": 3, 00:14:06.403 "num_base_bdevs_discovered": 3, 00:14:06.403 "num_base_bdevs_operational": 3, 00:14:06.403 "process": { 00:14:06.403 "type": "rebuild", 00:14:06.403 "target": "spare", 00:14:06.403 "progress": { 00:14:06.403 "blocks": 94208, 00:14:06.403 "percent": 74 00:14:06.403 } 00:14:06.403 }, 00:14:06.403 "base_bdevs_list": [ 00:14:06.403 { 00:14:06.403 "name": "spare", 00:14:06.403 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:06.403 "is_configured": true, 00:14:06.403 "data_offset": 2048, 00:14:06.403 "data_size": 63488 00:14:06.403 }, 00:14:06.403 { 00:14:06.403 "name": "BaseBdev2", 00:14:06.403 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:06.403 "is_configured": true, 00:14:06.403 "data_offset": 2048, 00:14:06.403 "data_size": 63488 00:14:06.403 }, 00:14:06.403 { 00:14:06.403 "name": "BaseBdev3", 00:14:06.403 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:06.403 "is_configured": true, 00:14:06.403 "data_offset": 2048, 00:14:06.403 "data_size": 63488 00:14:06.403 } 00:14:06.403 ] 00:14:06.403 }' 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.403 23:31:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.342 23:31:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.342 23:31:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.342 23:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.342 "name": "raid_bdev1", 00:14:07.342 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:07.342 "strip_size_kb": 64, 00:14:07.342 "state": "online", 00:14:07.342 "raid_level": "raid5f", 00:14:07.342 "superblock": true, 00:14:07.342 "num_base_bdevs": 3, 00:14:07.342 "num_base_bdevs_discovered": 3, 00:14:07.342 "num_base_bdevs_operational": 3, 00:14:07.342 "process": { 00:14:07.342 "type": "rebuild", 00:14:07.342 "target": "spare", 00:14:07.342 "progress": { 00:14:07.342 "blocks": 116736, 00:14:07.342 "percent": 91 00:14:07.342 } 00:14:07.342 }, 00:14:07.342 "base_bdevs_list": [ 00:14:07.342 { 00:14:07.342 "name": "spare", 00:14:07.342 "uuid": 
"dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:07.342 "is_configured": true, 00:14:07.342 "data_offset": 2048, 00:14:07.342 "data_size": 63488 00:14:07.342 }, 00:14:07.342 { 00:14:07.342 "name": "BaseBdev2", 00:14:07.342 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:07.342 "is_configured": true, 00:14:07.342 "data_offset": 2048, 00:14:07.342 "data_size": 63488 00:14:07.342 }, 00:14:07.342 { 00:14:07.342 "name": "BaseBdev3", 00:14:07.342 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:07.342 "is_configured": true, 00:14:07.342 "data_offset": 2048, 00:14:07.342 "data_size": 63488 00:14:07.342 } 00:14:07.342 ] 00:14:07.342 }' 00:14:07.342 23:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.342 23:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.342 23:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.342 23:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.342 23:31:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.602 [2024-09-30 23:31:47.437807] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:07.602 [2024-09-30 23:31:47.437883] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:07.602 [2024-09-30 23:31:47.437993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.541 "name": "raid_bdev1", 00:14:08.541 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:08.541 "strip_size_kb": 64, 00:14:08.541 "state": "online", 00:14:08.541 "raid_level": "raid5f", 00:14:08.541 "superblock": true, 00:14:08.541 "num_base_bdevs": 3, 00:14:08.541 "num_base_bdevs_discovered": 3, 00:14:08.541 "num_base_bdevs_operational": 3, 00:14:08.541 "base_bdevs_list": [ 00:14:08.541 { 00:14:08.541 "name": "spare", 00:14:08.541 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:08.541 "is_configured": true, 00:14:08.541 "data_offset": 2048, 00:14:08.541 "data_size": 63488 00:14:08.541 }, 00:14:08.541 { 00:14:08.541 "name": "BaseBdev2", 00:14:08.541 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:08.541 "is_configured": true, 00:14:08.541 "data_offset": 2048, 00:14:08.541 "data_size": 63488 00:14:08.541 }, 00:14:08.541 { 00:14:08.541 "name": "BaseBdev3", 00:14:08.541 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:08.541 "is_configured": true, 00:14:08.541 "data_offset": 2048, 00:14:08.541 "data_size": 63488 00:14:08.541 } 
00:14:08.541 ] 00:14:08.541 }' 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.541 "name": "raid_bdev1", 00:14:08.541 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:08.541 "strip_size_kb": 64, 00:14:08.541 "state": "online", 00:14:08.541 "raid_level": 
"raid5f", 00:14:08.541 "superblock": true, 00:14:08.541 "num_base_bdevs": 3, 00:14:08.541 "num_base_bdevs_discovered": 3, 00:14:08.541 "num_base_bdevs_operational": 3, 00:14:08.541 "base_bdevs_list": [ 00:14:08.541 { 00:14:08.541 "name": "spare", 00:14:08.541 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:08.541 "is_configured": true, 00:14:08.541 "data_offset": 2048, 00:14:08.541 "data_size": 63488 00:14:08.541 }, 00:14:08.541 { 00:14:08.541 "name": "BaseBdev2", 00:14:08.541 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:08.541 "is_configured": true, 00:14:08.541 "data_offset": 2048, 00:14:08.541 "data_size": 63488 00:14:08.541 }, 00:14:08.541 { 00:14:08.541 "name": "BaseBdev3", 00:14:08.541 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:08.541 "is_configured": true, 00:14:08.541 "data_offset": 2048, 00:14:08.541 "data_size": 63488 00:14:08.541 } 00:14:08.541 ] 00:14:08.541 }' 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.541 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.542 23:31:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.542 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.802 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.802 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.802 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.802 "name": "raid_bdev1", 00:14:08.802 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:08.802 "strip_size_kb": 64, 00:14:08.802 "state": "online", 00:14:08.802 "raid_level": "raid5f", 00:14:08.802 "superblock": true, 00:14:08.802 "num_base_bdevs": 3, 00:14:08.802 "num_base_bdevs_discovered": 3, 00:14:08.802 "num_base_bdevs_operational": 3, 00:14:08.802 "base_bdevs_list": [ 00:14:08.802 { 00:14:08.802 "name": "spare", 00:14:08.802 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:08.802 "is_configured": true, 00:14:08.802 "data_offset": 2048, 00:14:08.802 "data_size": 63488 00:14:08.802 }, 00:14:08.802 { 00:14:08.802 "name": "BaseBdev2", 00:14:08.802 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:08.802 "is_configured": true, 00:14:08.802 "data_offset": 2048, 00:14:08.802 
"data_size": 63488 00:14:08.802 }, 00:14:08.802 { 00:14:08.802 "name": "BaseBdev3", 00:14:08.802 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:08.802 "is_configured": true, 00:14:08.802 "data_offset": 2048, 00:14:08.802 "data_size": 63488 00:14:08.802 } 00:14:08.802 ] 00:14:08.802 }' 00:14:08.802 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.802 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.062 [2024-09-30 23:31:48.792256] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.062 [2024-09-30 23:31:48.792334] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.062 [2024-09-30 23:31:48.792428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.062 [2024-09-30 23:31:48.792520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.062 [2024-09-30 23:31:48.792607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:09.062 23:31:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:09.323 /dev/nbd0 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.323 1+0 records in 00:14:09.323 1+0 records out 00:14:09.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447242 s, 9.2 MB/s 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:09.323 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:09.583 /dev/nbd1 00:14:09.583 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:09.583 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:09.583 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:09.583 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:09.583 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:09.584 1+0 records in 00:14:09.584 1+0 records out 00:14:09.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421374 s, 9.7 MB/s 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.584 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.843 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.102 [2024-09-30 23:31:49.851020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.102 [2024-09-30 23:31:49.851081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.102 [2024-09-30 23:31:49.851105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:10.102 [2024-09-30 23:31:49.851114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.102 [2024-09-30 23:31:49.853553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.102 [2024-09-30 23:31:49.853635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.102 [2024-09-30 23:31:49.853725] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:10.102 [2024-09-30 23:31:49.853783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.102 [2024-09-30 23:31:49.853930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.102 [2024-09-30 23:31:49.854038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.102 spare 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.102 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.102 [2024-09-30 23:31:49.953941] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:10.102 [2024-09-30 23:31:49.953965] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:10.102 [2024-09-30 23:31:49.954236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:14:10.102 [2024-09-30 23:31:49.954691] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:10.102 [2024-09-30 23:31:49.954712] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:10.102 [2024-09-30 23:31:49.954910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.362 23:31:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.362 23:31:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.362 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.362 "name": "raid_bdev1", 00:14:10.362 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:10.362 "strip_size_kb": 64, 00:14:10.362 "state": "online", 00:14:10.362 "raid_level": "raid5f", 00:14:10.362 "superblock": true, 00:14:10.362 "num_base_bdevs": 3, 00:14:10.362 "num_base_bdevs_discovered": 3, 00:14:10.362 "num_base_bdevs_operational": 3, 00:14:10.362 "base_bdevs_list": [ 00:14:10.362 { 00:14:10.362 "name": "spare", 00:14:10.362 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:10.362 "is_configured": true, 00:14:10.362 "data_offset": 2048, 00:14:10.362 "data_size": 63488 00:14:10.362 }, 00:14:10.362 { 00:14:10.362 "name": "BaseBdev2", 00:14:10.362 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:10.362 "is_configured": true, 00:14:10.362 "data_offset": 2048, 00:14:10.362 "data_size": 63488 00:14:10.362 }, 00:14:10.362 { 00:14:10.362 "name": "BaseBdev3", 00:14:10.362 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:10.362 "is_configured": true, 00:14:10.362 "data_offset": 2048, 00:14:10.362 "data_size": 63488 00:14:10.362 } 00:14:10.362 ] 00:14:10.362 }' 00:14:10.362 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.362 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.621 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.621 23:31:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.621 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.622 "name": "raid_bdev1", 00:14:10.622 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:10.622 "strip_size_kb": 64, 00:14:10.622 "state": "online", 00:14:10.622 "raid_level": "raid5f", 00:14:10.622 "superblock": true, 00:14:10.622 "num_base_bdevs": 3, 00:14:10.622 "num_base_bdevs_discovered": 3, 00:14:10.622 "num_base_bdevs_operational": 3, 00:14:10.622 "base_bdevs_list": [ 00:14:10.622 { 00:14:10.622 "name": "spare", 00:14:10.622 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:10.622 "is_configured": true, 00:14:10.622 "data_offset": 2048, 00:14:10.622 "data_size": 63488 00:14:10.622 }, 00:14:10.622 { 00:14:10.622 "name": "BaseBdev2", 00:14:10.622 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:10.622 "is_configured": true, 00:14:10.622 "data_offset": 2048, 00:14:10.622 "data_size": 63488 00:14:10.622 }, 00:14:10.622 { 00:14:10.622 "name": "BaseBdev3", 00:14:10.622 "uuid": 
"150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:10.622 "is_configured": true, 00:14:10.622 "data_offset": 2048, 00:14:10.622 "data_size": 63488 00:14:10.622 } 00:14:10.622 ] 00:14:10.622 }' 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.622 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.881 [2024-09-30 23:31:50.545937] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:10.881 
23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.881 "name": "raid_bdev1", 00:14:10.881 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:10.881 "strip_size_kb": 64, 00:14:10.881 "state": "online", 00:14:10.881 "raid_level": "raid5f", 00:14:10.881 "superblock": true, 00:14:10.881 "num_base_bdevs": 3, 00:14:10.881 "num_base_bdevs_discovered": 2, 00:14:10.881 "num_base_bdevs_operational": 2, 
00:14:10.881 "base_bdevs_list": [ 00:14:10.881 { 00:14:10.881 "name": null, 00:14:10.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.881 "is_configured": false, 00:14:10.881 "data_offset": 0, 00:14:10.881 "data_size": 63488 00:14:10.881 }, 00:14:10.881 { 00:14:10.881 "name": "BaseBdev2", 00:14:10.881 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:10.881 "is_configured": true, 00:14:10.881 "data_offset": 2048, 00:14:10.881 "data_size": 63488 00:14:10.881 }, 00:14:10.881 { 00:14:10.881 "name": "BaseBdev3", 00:14:10.881 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:10.881 "is_configured": true, 00:14:10.881 "data_offset": 2048, 00:14:10.881 "data_size": 63488 00:14:10.881 } 00:14:10.881 ] 00:14:10.881 }' 00:14:10.881 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.882 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.141 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.141 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.141 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.141 [2024-09-30 23:31:50.989137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.141 [2024-09-30 23:31:50.989335] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:11.141 [2024-09-30 23:31:50.989392] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:11.141 [2024-09-30 23:31:50.989466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.400 [2024-09-30 23:31:50.995792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:14:11.400 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.400 23:31:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:11.400 [2024-09-30 23:31:50.998236] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.337 "name": "raid_bdev1", 00:14:12.337 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:12.337 "strip_size_kb": 64, 00:14:12.337 "state": "online", 00:14:12.337 
"raid_level": "raid5f", 00:14:12.337 "superblock": true, 00:14:12.337 "num_base_bdevs": 3, 00:14:12.337 "num_base_bdevs_discovered": 3, 00:14:12.337 "num_base_bdevs_operational": 3, 00:14:12.337 "process": { 00:14:12.337 "type": "rebuild", 00:14:12.337 "target": "spare", 00:14:12.337 "progress": { 00:14:12.337 "blocks": 20480, 00:14:12.337 "percent": 16 00:14:12.337 } 00:14:12.337 }, 00:14:12.337 "base_bdevs_list": [ 00:14:12.337 { 00:14:12.337 "name": "spare", 00:14:12.337 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:12.337 "is_configured": true, 00:14:12.337 "data_offset": 2048, 00:14:12.337 "data_size": 63488 00:14:12.337 }, 00:14:12.337 { 00:14:12.337 "name": "BaseBdev2", 00:14:12.337 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:12.337 "is_configured": true, 00:14:12.337 "data_offset": 2048, 00:14:12.337 "data_size": 63488 00:14:12.337 }, 00:14:12.337 { 00:14:12.337 "name": "BaseBdev3", 00:14:12.337 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:12.337 "is_configured": true, 00:14:12.337 "data_offset": 2048, 00:14:12.337 "data_size": 63488 00:14:12.337 } 00:14:12.337 ] 00:14:12.337 }' 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.337 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.337 [2024-09-30 23:31:52.157889] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.597 [2024-09-30 23:31:52.206484] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:12.597 [2024-09-30 23:31:52.206575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.597 [2024-09-30 23:31:52.206596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.597 [2024-09-30 23:31:52.206604] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.597 "name": "raid_bdev1", 00:14:12.597 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:12.597 "strip_size_kb": 64, 00:14:12.597 "state": "online", 00:14:12.597 "raid_level": "raid5f", 00:14:12.597 "superblock": true, 00:14:12.597 "num_base_bdevs": 3, 00:14:12.597 "num_base_bdevs_discovered": 2, 00:14:12.597 "num_base_bdevs_operational": 2, 00:14:12.597 "base_bdevs_list": [ 00:14:12.597 { 00:14:12.597 "name": null, 00:14:12.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.597 "is_configured": false, 00:14:12.597 "data_offset": 0, 00:14:12.597 "data_size": 63488 00:14:12.597 }, 00:14:12.597 { 00:14:12.597 "name": "BaseBdev2", 00:14:12.597 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:12.597 "is_configured": true, 00:14:12.597 "data_offset": 2048, 00:14:12.597 "data_size": 63488 00:14:12.597 }, 00:14:12.597 { 00:14:12.597 "name": "BaseBdev3", 00:14:12.597 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:12.597 "is_configured": true, 00:14:12.597 "data_offset": 2048, 00:14:12.597 "data_size": 63488 00:14:12.597 } 00:14:12.597 ] 00:14:12.597 }' 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.597 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.855 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.855 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.855 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.855 [2024-09-30 23:31:52.682015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.855 [2024-09-30 23:31:52.682103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.855 [2024-09-30 23:31:52.682141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:12.855 [2024-09-30 23:31:52.682167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.856 [2024-09-30 23:31:52.682658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.856 [2024-09-30 23:31:52.682711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.856 [2024-09-30 23:31:52.682818] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:12.856 [2024-09-30 23:31:52.682854] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:12.856 [2024-09-30 23:31:52.682911] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
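The trace above repeatedly runs `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'` and then compares fields of the captured JSON against expected values (state, RAID level, strip size, operational bdev count). As a rough illustration only — this is a hypothetical Python sketch of that selection-and-compare logic, not SPDK code, and the sample JSON below is trimmed from the values visible in this trace:

```python
import json

# Sample trimmed from the raid_bdev_info JSON captured earlier in this trace.
sample = json.loads("""
[{"name": "raid_bdev1", "state": "online", "raid_level": "raid5f",
  "strip_size_kb": 64, "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2, "num_base_bdevs_operational": 2}]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Mimic the shell helper: pick one bdev (jq's select(.name == ...))
    and compare the fields the test asserts on."""
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)

print(verify_raid_bdev_state(sample, "raid_bdev1", "online", "raid5f", 64, 2))
```

This mirrors the `verify_raid_bdev_state raid_bdev1 online raid5f 64 2` calls seen in the xtrace output, where a mismatch would fail the test.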
00:14:12.856 [2024-09-30 23:31:52.682965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.856 [2024-09-30 23:31:52.686761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:12.856 spare 00:14:12.856 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.856 23:31:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:12.856 [2024-09-30 23:31:52.689166] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.233 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.234 "name": "raid_bdev1", 00:14:14.234 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:14.234 "strip_size_kb": 64, 00:14:14.234 "state": 
"online", 00:14:14.234 "raid_level": "raid5f", 00:14:14.234 "superblock": true, 00:14:14.234 "num_base_bdevs": 3, 00:14:14.234 "num_base_bdevs_discovered": 3, 00:14:14.234 "num_base_bdevs_operational": 3, 00:14:14.234 "process": { 00:14:14.234 "type": "rebuild", 00:14:14.234 "target": "spare", 00:14:14.234 "progress": { 00:14:14.234 "blocks": 20480, 00:14:14.234 "percent": 16 00:14:14.234 } 00:14:14.234 }, 00:14:14.234 "base_bdevs_list": [ 00:14:14.234 { 00:14:14.234 "name": "spare", 00:14:14.234 "uuid": "dbe41670-4efa-5ac6-b6db-0248d49e52df", 00:14:14.234 "is_configured": true, 00:14:14.234 "data_offset": 2048, 00:14:14.234 "data_size": 63488 00:14:14.234 }, 00:14:14.234 { 00:14:14.234 "name": "BaseBdev2", 00:14:14.234 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:14.234 "is_configured": true, 00:14:14.234 "data_offset": 2048, 00:14:14.234 "data_size": 63488 00:14:14.234 }, 00:14:14.234 { 00:14:14.234 "name": "BaseBdev3", 00:14:14.234 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:14.234 "is_configured": true, 00:14:14.234 "data_offset": 2048, 00:14:14.234 "data_size": 63488 00:14:14.234 } 00:14:14.234 ] 00:14:14.234 }' 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.234 [2024-09-30 23:31:53.853446] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.234 [2024-09-30 23:31:53.896980] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.234 [2024-09-30 23:31:53.897084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.234 [2024-09-30 23:31:53.897119] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.234 [2024-09-30 23:31:53.897146] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.234 "name": "raid_bdev1", 00:14:14.234 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:14.234 "strip_size_kb": 64, 00:14:14.234 "state": "online", 00:14:14.234 "raid_level": "raid5f", 00:14:14.234 "superblock": true, 00:14:14.234 "num_base_bdevs": 3, 00:14:14.234 "num_base_bdevs_discovered": 2, 00:14:14.234 "num_base_bdevs_operational": 2, 00:14:14.234 "base_bdevs_list": [ 00:14:14.234 { 00:14:14.234 "name": null, 00:14:14.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.234 "is_configured": false, 00:14:14.234 "data_offset": 0, 00:14:14.234 "data_size": 63488 00:14:14.234 }, 00:14:14.234 { 00:14:14.234 "name": "BaseBdev2", 00:14:14.234 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:14.234 "is_configured": true, 00:14:14.234 "data_offset": 2048, 00:14:14.234 "data_size": 63488 00:14:14.234 }, 00:14:14.234 { 00:14:14.234 "name": "BaseBdev3", 00:14:14.234 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:14.234 "is_configured": true, 00:14:14.234 "data_offset": 2048, 00:14:14.234 "data_size": 63488 00:14:14.234 } 00:14:14.234 ] 00:14:14.234 }' 00:14:14.234 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.235 23:31:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.803 "name": "raid_bdev1", 00:14:14.803 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:14.803 "strip_size_kb": 64, 00:14:14.803 "state": "online", 00:14:14.803 "raid_level": "raid5f", 00:14:14.803 "superblock": true, 00:14:14.803 "num_base_bdevs": 3, 00:14:14.803 "num_base_bdevs_discovered": 2, 00:14:14.803 "num_base_bdevs_operational": 2, 00:14:14.803 "base_bdevs_list": [ 00:14:14.803 { 00:14:14.803 "name": null, 00:14:14.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.803 "is_configured": false, 00:14:14.803 "data_offset": 0, 00:14:14.803 "data_size": 63488 00:14:14.803 }, 00:14:14.803 { 00:14:14.803 "name": "BaseBdev2", 00:14:14.803 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:14.803 "is_configured": true, 00:14:14.803 "data_offset": 2048, 00:14:14.803 "data_size": 63488 00:14:14.803 }, 00:14:14.803 { 00:14:14.803 "name": "BaseBdev3", 00:14:14.803 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:14.803 "is_configured": true, 
00:14:14.803 "data_offset": 2048, 00:14:14.803 "data_size": 63488 00:14:14.803 } 00:14:14.803 ] 00:14:14.803 }' 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.803 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.804 [2024-09-30 23:31:54.519988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:14.804 [2024-09-30 23:31:54.520036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.804 [2024-09-30 23:31:54.520058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:14.804 [2024-09-30 23:31:54.520069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.804 [2024-09-30 23:31:54.520477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.804 [2024-09-30 
23:31:54.520496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:14.804 [2024-09-30 23:31:54.520560] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:14.804 [2024-09-30 23:31:54.520576] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:14.804 [2024-09-30 23:31:54.520583] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:14.804 [2024-09-30 23:31:54.520595] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:14.804 BaseBdev1 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.804 23:31:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.741 23:31:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.741 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.741 "name": "raid_bdev1", 00:14:15.741 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:15.741 "strip_size_kb": 64, 00:14:15.741 "state": "online", 00:14:15.741 "raid_level": "raid5f", 00:14:15.741 "superblock": true, 00:14:15.741 "num_base_bdevs": 3, 00:14:15.741 "num_base_bdevs_discovered": 2, 00:14:15.741 "num_base_bdevs_operational": 2, 00:14:15.741 "base_bdevs_list": [ 00:14:15.741 { 00:14:15.741 "name": null, 00:14:15.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.741 "is_configured": false, 00:14:15.741 "data_offset": 0, 00:14:15.741 "data_size": 63488 00:14:15.741 }, 00:14:15.741 { 00:14:15.741 "name": "BaseBdev2", 00:14:15.741 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:15.741 "is_configured": true, 00:14:15.741 "data_offset": 2048, 00:14:15.741 "data_size": 63488 00:14:15.741 }, 00:14:15.741 { 00:14:15.742 "name": "BaseBdev3", 00:14:15.742 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:15.742 "is_configured": true, 00:14:15.742 "data_offset": 2048, 00:14:15.742 "data_size": 63488 00:14:15.742 } 00:14:15.742 ] 00:14:15.742 }' 00:14:15.742 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.742 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.308 23:31:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.308 "name": "raid_bdev1", 00:14:16.308 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:16.308 "strip_size_kb": 64, 00:14:16.308 "state": "online", 00:14:16.308 "raid_level": "raid5f", 00:14:16.308 "superblock": true, 00:14:16.308 "num_base_bdevs": 3, 00:14:16.308 "num_base_bdevs_discovered": 2, 00:14:16.308 "num_base_bdevs_operational": 2, 00:14:16.308 "base_bdevs_list": [ 00:14:16.308 { 00:14:16.308 "name": null, 00:14:16.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.308 "is_configured": false, 00:14:16.308 "data_offset": 0, 00:14:16.308 "data_size": 63488 00:14:16.308 }, 00:14:16.308 { 00:14:16.308 "name": "BaseBdev2", 00:14:16.308 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 
00:14:16.308 "is_configured": true, 00:14:16.308 "data_offset": 2048, 00:14:16.308 "data_size": 63488 00:14:16.308 }, 00:14:16.308 { 00:14:16.308 "name": "BaseBdev3", 00:14:16.308 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:16.308 "is_configured": true, 00:14:16.308 "data_offset": 2048, 00:14:16.308 "data_size": 63488 00:14:16.308 } 00:14:16.308 ] 00:14:16.308 }' 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.308 23:31:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.308 [2024-09-30 23:31:56.093445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.308 [2024-09-30 23:31:56.093605] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:16.308 [2024-09-30 23:31:56.093658] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:16.308 request: 00:14:16.308 { 00:14:16.308 "base_bdev": "BaseBdev1", 00:14:16.308 "raid_bdev": "raid_bdev1", 00:14:16.308 "method": "bdev_raid_add_base_bdev", 00:14:16.308 "req_id": 1 00:14:16.308 } 00:14:16.308 Got JSON-RPC error response 00:14:16.308 response: 00:14:16.308 { 00:14:16.308 "code": -22, 00:14:16.308 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:16.308 } 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.308 23:31:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.683 "name": "raid_bdev1", 00:14:17.683 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:17.683 "strip_size_kb": 64, 00:14:17.683 "state": "online", 00:14:17.683 "raid_level": "raid5f", 00:14:17.683 "superblock": true, 00:14:17.683 "num_base_bdevs": 3, 00:14:17.683 "num_base_bdevs_discovered": 2, 00:14:17.683 "num_base_bdevs_operational": 2, 00:14:17.683 "base_bdevs_list": [ 00:14:17.683 { 00:14:17.683 "name": null, 00:14:17.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.683 "is_configured": false, 00:14:17.683 "data_offset": 0, 00:14:17.683 "data_size": 63488 00:14:17.683 }, 00:14:17.683 { 00:14:17.683 
"name": "BaseBdev2", 00:14:17.683 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:17.683 "is_configured": true, 00:14:17.683 "data_offset": 2048, 00:14:17.683 "data_size": 63488 00:14:17.683 }, 00:14:17.683 { 00:14:17.683 "name": "BaseBdev3", 00:14:17.683 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:17.683 "is_configured": true, 00:14:17.683 "data_offset": 2048, 00:14:17.683 "data_size": 63488 00:14:17.683 } 00:14:17.683 ] 00:14:17.683 }' 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.683 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.941 "name": "raid_bdev1", 00:14:17.941 "uuid": "5e203c22-de12-4c95-b611-acae9972f6d0", 00:14:17.941 
"strip_size_kb": 64, 00:14:17.941 "state": "online", 00:14:17.941 "raid_level": "raid5f", 00:14:17.941 "superblock": true, 00:14:17.941 "num_base_bdevs": 3, 00:14:17.941 "num_base_bdevs_discovered": 2, 00:14:17.941 "num_base_bdevs_operational": 2, 00:14:17.941 "base_bdevs_list": [ 00:14:17.941 { 00:14:17.941 "name": null, 00:14:17.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.941 "is_configured": false, 00:14:17.941 "data_offset": 0, 00:14:17.941 "data_size": 63488 00:14:17.941 }, 00:14:17.941 { 00:14:17.941 "name": "BaseBdev2", 00:14:17.941 "uuid": "893e1111-edd6-5061-8cbd-d3c70c7188a4", 00:14:17.941 "is_configured": true, 00:14:17.941 "data_offset": 2048, 00:14:17.941 "data_size": 63488 00:14:17.941 }, 00:14:17.941 { 00:14:17.941 "name": "BaseBdev3", 00:14:17.941 "uuid": "150b2170-8294-5a49-8f24-ba3b809ae5fe", 00:14:17.941 "is_configured": true, 00:14:17.941 "data_offset": 2048, 00:14:17.941 "data_size": 63488 00:14:17.941 } 00:14:17.941 ] 00:14:17.941 }' 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92559 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92559 ']' 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92559 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:17.941 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.941 23:31:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92559 00:14:17.941 killing process with pid 92559 00:14:17.941 Received shutdown signal, test time was about 60.000000 seconds 00:14:17.941 00:14:17.941 Latency(us) 00:14:17.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.942 =================================================================================================================== 00:14:17.942 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:17.942 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.942 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.942 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92559' 00:14:17.942 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92559 00:14:17.942 [2024-09-30 23:31:57.712732] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.942 [2024-09-30 23:31:57.712821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.942 [2024-09-30 23:31:57.712888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.942 [2024-09-30 23:31:57.712898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:17.942 23:31:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92559 00:14:17.942 [2024-09-30 23:31:57.787575] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.509 23:31:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:18.509 00:14:18.509 real 0m21.648s 00:14:18.509 user 0m27.990s 00:14:18.509 sys 0m2.739s 00:14:18.509 ************************************ 
00:14:18.509 END TEST raid5f_rebuild_test_sb 00:14:18.509 ************************************ 00:14:18.509 23:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.509 23:31:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.509 23:31:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:18.509 23:31:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:18.509 23:31:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:18.509 23:31:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.509 23:31:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.509 ************************************ 00:14:18.509 START TEST raid5f_state_function_test 00:14:18.509 ************************************ 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:18.509 23:31:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93294 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93294' 00:14:18.509 Process raid pid: 93294 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93294 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93294 ']' 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.509 23:31:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.509 [2024-09-30 23:31:58.321778] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:14:18.509 [2024-09-30 23:31:58.322033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.767 [2024-09-30 23:31:58.484788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.767 [2024-09-30 23:31:58.555961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.026 [2024-09-30 23:31:58.630817] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.026 [2024-09-30 23:31:58.630889] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.285 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.285 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:19.285 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:19.285 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.285 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.546 [2024-09-30 23:31:59.142047] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.546 [2024-09-30 23:31:59.142101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.546 [2024-09-30 23:31:59.142114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.546 [2024-09-30 23:31:59.142123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.546 [2024-09-30 23:31:59.142129] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:19.546 [2024-09-30 23:31:59.142143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.546 [2024-09-30 23:31:59.142149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:19.546 [2024-09-30 23:31:59.142158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.546 23:31:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.546 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.546 "name": "Existed_Raid", 00:14:19.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.546 "strip_size_kb": 64, 00:14:19.547 "state": "configuring", 00:14:19.547 "raid_level": "raid5f", 00:14:19.547 "superblock": false, 00:14:19.547 "num_base_bdevs": 4, 00:14:19.547 "num_base_bdevs_discovered": 0, 00:14:19.547 "num_base_bdevs_operational": 4, 00:14:19.547 "base_bdevs_list": [ 00:14:19.547 { 00:14:19.547 "name": "BaseBdev1", 00:14:19.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.547 "is_configured": false, 00:14:19.547 "data_offset": 0, 00:14:19.547 "data_size": 0 00:14:19.547 }, 00:14:19.547 { 00:14:19.547 "name": "BaseBdev2", 00:14:19.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.547 "is_configured": false, 00:14:19.547 "data_offset": 0, 00:14:19.547 "data_size": 0 00:14:19.547 }, 00:14:19.547 { 00:14:19.547 "name": "BaseBdev3", 00:14:19.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.547 "is_configured": false, 00:14:19.547 "data_offset": 0, 00:14:19.547 "data_size": 0 00:14:19.547 }, 00:14:19.547 { 00:14:19.547 "name": "BaseBdev4", 00:14:19.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.547 "is_configured": false, 00:14:19.547 "data_offset": 0, 00:14:19.547 "data_size": 0 00:14:19.547 } 00:14:19.547 ] 00:14:19.547 }' 00:14:19.547 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.547 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.805 [2024-09-30 23:31:59.593142] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.805 [2024-09-30 23:31:59.593232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.805 [2024-09-30 23:31:59.605162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.805 [2024-09-30 23:31:59.605229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.805 [2024-09-30 23:31:59.605253] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.805 [2024-09-30 23:31:59.605275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.805 [2024-09-30 23:31:59.605291] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.805 [2024-09-30 23:31:59.605311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.805 [2024-09-30 23:31:59.605327] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:19.805 [2024-09-30 23:31:59.605346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.805 [2024-09-30 23:31:59.631844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.805 BaseBdev1 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.805 
23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.805 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.805 [ 00:14:19.805 { 00:14:19.805 "name": "BaseBdev1", 00:14:19.805 "aliases": [ 00:14:19.805 "1135a5ea-c334-4fe6-9c0c-182bee662e51" 00:14:19.805 ], 00:14:19.805 "product_name": "Malloc disk", 00:14:20.063 "block_size": 512, 00:14:20.063 "num_blocks": 65536, 00:14:20.063 "uuid": "1135a5ea-c334-4fe6-9c0c-182bee662e51", 00:14:20.063 "assigned_rate_limits": { 00:14:20.063 "rw_ios_per_sec": 0, 00:14:20.063 "rw_mbytes_per_sec": 0, 00:14:20.063 "r_mbytes_per_sec": 0, 00:14:20.063 "w_mbytes_per_sec": 0 00:14:20.063 }, 00:14:20.063 "claimed": true, 00:14:20.063 "claim_type": "exclusive_write", 00:14:20.063 "zoned": false, 00:14:20.063 "supported_io_types": { 00:14:20.063 "read": true, 00:14:20.063 "write": true, 00:14:20.063 "unmap": true, 00:14:20.063 "flush": true, 00:14:20.063 "reset": true, 00:14:20.063 "nvme_admin": false, 00:14:20.063 "nvme_io": false, 00:14:20.063 "nvme_io_md": false, 00:14:20.063 "write_zeroes": true, 00:14:20.063 "zcopy": true, 00:14:20.063 "get_zone_info": false, 00:14:20.063 "zone_management": false, 00:14:20.063 "zone_append": false, 00:14:20.063 "compare": false, 00:14:20.063 "compare_and_write": false, 00:14:20.063 "abort": true, 00:14:20.063 "seek_hole": false, 00:14:20.063 "seek_data": false, 00:14:20.063 "copy": true, 00:14:20.063 "nvme_iov_md": false 00:14:20.063 }, 00:14:20.063 "memory_domains": [ 00:14:20.063 { 00:14:20.063 "dma_device_id": "system", 00:14:20.063 "dma_device_type": 1 00:14:20.063 }, 00:14:20.063 { 00:14:20.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.063 "dma_device_type": 2 00:14:20.063 } 00:14:20.063 ], 00:14:20.063 "driver_specific": {} 00:14:20.063 } 
00:14:20.063 ] 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.063 "name": "Existed_Raid", 00:14:20.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.063 "strip_size_kb": 64, 00:14:20.063 "state": "configuring", 00:14:20.063 "raid_level": "raid5f", 00:14:20.063 "superblock": false, 00:14:20.063 "num_base_bdevs": 4, 00:14:20.063 "num_base_bdevs_discovered": 1, 00:14:20.063 "num_base_bdevs_operational": 4, 00:14:20.063 "base_bdevs_list": [ 00:14:20.063 { 00:14:20.063 "name": "BaseBdev1", 00:14:20.063 "uuid": "1135a5ea-c334-4fe6-9c0c-182bee662e51", 00:14:20.063 "is_configured": true, 00:14:20.063 "data_offset": 0, 00:14:20.063 "data_size": 65536 00:14:20.063 }, 00:14:20.063 { 00:14:20.063 "name": "BaseBdev2", 00:14:20.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.063 "is_configured": false, 00:14:20.063 "data_offset": 0, 00:14:20.063 "data_size": 0 00:14:20.063 }, 00:14:20.063 { 00:14:20.063 "name": "BaseBdev3", 00:14:20.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.063 "is_configured": false, 00:14:20.063 "data_offset": 0, 00:14:20.063 "data_size": 0 00:14:20.063 }, 00:14:20.063 { 00:14:20.063 "name": "BaseBdev4", 00:14:20.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.063 "is_configured": false, 00:14:20.063 "data_offset": 0, 00:14:20.063 "data_size": 0 00:14:20.063 } 00:14:20.063 ] 00:14:20.063 }' 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.063 23:31:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.323 
[2024-09-30 23:32:00.099175] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.323 [2024-09-30 23:32:00.099247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.323 [2024-09-30 23:32:00.111170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.323 [2024-09-30 23:32:00.113018] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.323 [2024-09-30 23:32:00.113059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.323 [2024-09-30 23:32:00.113069] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.323 [2024-09-30 23:32:00.113078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.323 [2024-09-30 23:32:00.113085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:20.323 [2024-09-30 23:32:00.113093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.323 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.323 "name": "Existed_Raid", 00:14:20.323 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:20.323 "strip_size_kb": 64, 00:14:20.323 "state": "configuring", 00:14:20.323 "raid_level": "raid5f", 00:14:20.323 "superblock": false, 00:14:20.323 "num_base_bdevs": 4, 00:14:20.323 "num_base_bdevs_discovered": 1, 00:14:20.323 "num_base_bdevs_operational": 4, 00:14:20.323 "base_bdevs_list": [ 00:14:20.323 { 00:14:20.323 "name": "BaseBdev1", 00:14:20.323 "uuid": "1135a5ea-c334-4fe6-9c0c-182bee662e51", 00:14:20.323 "is_configured": true, 00:14:20.323 "data_offset": 0, 00:14:20.324 "data_size": 65536 00:14:20.324 }, 00:14:20.324 { 00:14:20.324 "name": "BaseBdev2", 00:14:20.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.324 "is_configured": false, 00:14:20.324 "data_offset": 0, 00:14:20.324 "data_size": 0 00:14:20.324 }, 00:14:20.324 { 00:14:20.324 "name": "BaseBdev3", 00:14:20.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.324 "is_configured": false, 00:14:20.324 "data_offset": 0, 00:14:20.324 "data_size": 0 00:14:20.324 }, 00:14:20.324 { 00:14:20.324 "name": "BaseBdev4", 00:14:20.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.324 "is_configured": false, 00:14:20.324 "data_offset": 0, 00:14:20.324 "data_size": 0 00:14:20.324 } 00:14:20.324 ] 00:14:20.324 }' 00:14:20.324 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.324 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.939 [2024-09-30 23:32:00.553585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.939 BaseBdev2 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.939 [ 00:14:20.939 { 00:14:20.939 "name": "BaseBdev2", 00:14:20.939 "aliases": [ 00:14:20.939 "7795b023-100c-44c4-a6d4-fdce6eb2ce53" 00:14:20.939 ], 00:14:20.939 "product_name": "Malloc disk", 00:14:20.939 "block_size": 512, 00:14:20.939 "num_blocks": 65536, 00:14:20.939 "uuid": "7795b023-100c-44c4-a6d4-fdce6eb2ce53", 00:14:20.939 "assigned_rate_limits": { 00:14:20.939 "rw_ios_per_sec": 0, 00:14:20.939 "rw_mbytes_per_sec": 0, 00:14:20.939 
"r_mbytes_per_sec": 0, 00:14:20.939 "w_mbytes_per_sec": 0 00:14:20.939 }, 00:14:20.939 "claimed": true, 00:14:20.939 "claim_type": "exclusive_write", 00:14:20.939 "zoned": false, 00:14:20.939 "supported_io_types": { 00:14:20.939 "read": true, 00:14:20.939 "write": true, 00:14:20.939 "unmap": true, 00:14:20.939 "flush": true, 00:14:20.939 "reset": true, 00:14:20.939 "nvme_admin": false, 00:14:20.939 "nvme_io": false, 00:14:20.939 "nvme_io_md": false, 00:14:20.939 "write_zeroes": true, 00:14:20.939 "zcopy": true, 00:14:20.939 "get_zone_info": false, 00:14:20.939 "zone_management": false, 00:14:20.939 "zone_append": false, 00:14:20.939 "compare": false, 00:14:20.939 "compare_and_write": false, 00:14:20.939 "abort": true, 00:14:20.939 "seek_hole": false, 00:14:20.939 "seek_data": false, 00:14:20.939 "copy": true, 00:14:20.939 "nvme_iov_md": false 00:14:20.939 }, 00:14:20.939 "memory_domains": [ 00:14:20.939 { 00:14:20.939 "dma_device_id": "system", 00:14:20.939 "dma_device_type": 1 00:14:20.939 }, 00:14:20.939 { 00:14:20.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.939 "dma_device_type": 2 00:14:20.939 } 00:14:20.939 ], 00:14:20.939 "driver_specific": {} 00:14:20.939 } 00:14:20.939 ] 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.939 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.939 "name": "Existed_Raid", 00:14:20.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.939 "strip_size_kb": 64, 00:14:20.939 "state": "configuring", 00:14:20.939 "raid_level": "raid5f", 00:14:20.939 "superblock": false, 00:14:20.939 "num_base_bdevs": 4, 00:14:20.939 "num_base_bdevs_discovered": 2, 00:14:20.939 "num_base_bdevs_operational": 4, 00:14:20.939 "base_bdevs_list": [ 00:14:20.939 { 00:14:20.939 "name": "BaseBdev1", 00:14:20.939 "uuid": 
"1135a5ea-c334-4fe6-9c0c-182bee662e51", 00:14:20.939 "is_configured": true, 00:14:20.939 "data_offset": 0, 00:14:20.939 "data_size": 65536 00:14:20.939 }, 00:14:20.939 { 00:14:20.939 "name": "BaseBdev2", 00:14:20.939 "uuid": "7795b023-100c-44c4-a6d4-fdce6eb2ce53", 00:14:20.939 "is_configured": true, 00:14:20.939 "data_offset": 0, 00:14:20.939 "data_size": 65536 00:14:20.939 }, 00:14:20.939 { 00:14:20.939 "name": "BaseBdev3", 00:14:20.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.939 "is_configured": false, 00:14:20.939 "data_offset": 0, 00:14:20.939 "data_size": 0 00:14:20.939 }, 00:14:20.939 { 00:14:20.939 "name": "BaseBdev4", 00:14:20.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.939 "is_configured": false, 00:14:20.939 "data_offset": 0, 00:14:20.939 "data_size": 0 00:14:20.939 } 00:14:20.940 ] 00:14:20.940 }' 00:14:20.940 23:32:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.940 23:32:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.206 BaseBdev3 00:14:21.206 [2024-09-30 23:32:01.043663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.206 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.465 [ 00:14:21.465 { 00:14:21.465 "name": "BaseBdev3", 00:14:21.465 "aliases": [ 00:14:21.465 "29e07563-1826-462a-8720-a176d0f9439f" 00:14:21.465 ], 00:14:21.465 "product_name": "Malloc disk", 00:14:21.465 "block_size": 512, 00:14:21.465 "num_blocks": 65536, 00:14:21.465 "uuid": "29e07563-1826-462a-8720-a176d0f9439f", 00:14:21.465 "assigned_rate_limits": { 00:14:21.465 "rw_ios_per_sec": 0, 00:14:21.465 "rw_mbytes_per_sec": 0, 00:14:21.465 "r_mbytes_per_sec": 0, 00:14:21.465 "w_mbytes_per_sec": 0 00:14:21.465 }, 00:14:21.465 "claimed": true, 00:14:21.465 "claim_type": "exclusive_write", 00:14:21.465 "zoned": false, 00:14:21.465 "supported_io_types": { 00:14:21.465 "read": true, 00:14:21.465 "write": true, 00:14:21.465 "unmap": true, 00:14:21.465 "flush": true, 00:14:21.465 "reset": true, 00:14:21.465 "nvme_admin": false, 
00:14:21.465 "nvme_io": false, 00:14:21.465 "nvme_io_md": false, 00:14:21.465 "write_zeroes": true, 00:14:21.465 "zcopy": true, 00:14:21.465 "get_zone_info": false, 00:14:21.465 "zone_management": false, 00:14:21.465 "zone_append": false, 00:14:21.465 "compare": false, 00:14:21.465 "compare_and_write": false, 00:14:21.465 "abort": true, 00:14:21.465 "seek_hole": false, 00:14:21.465 "seek_data": false, 00:14:21.465 "copy": true, 00:14:21.465 "nvme_iov_md": false 00:14:21.465 }, 00:14:21.465 "memory_domains": [ 00:14:21.465 { 00:14:21.465 "dma_device_id": "system", 00:14:21.465 "dma_device_type": 1 00:14:21.465 }, 00:14:21.465 { 00:14:21.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.465 "dma_device_type": 2 00:14:21.465 } 00:14:21.465 ], 00:14:21.465 "driver_specific": {} 00:14:21.465 } 00:14:21.465 ] 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.465 "name": "Existed_Raid", 00:14:21.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.465 "strip_size_kb": 64, 00:14:21.465 "state": "configuring", 00:14:21.465 "raid_level": "raid5f", 00:14:21.465 "superblock": false, 00:14:21.465 "num_base_bdevs": 4, 00:14:21.465 "num_base_bdevs_discovered": 3, 00:14:21.465 "num_base_bdevs_operational": 4, 00:14:21.465 "base_bdevs_list": [ 00:14:21.465 { 00:14:21.465 "name": "BaseBdev1", 00:14:21.465 "uuid": "1135a5ea-c334-4fe6-9c0c-182bee662e51", 00:14:21.465 "is_configured": true, 00:14:21.465 "data_offset": 0, 00:14:21.465 "data_size": 65536 00:14:21.465 }, 00:14:21.465 { 00:14:21.465 "name": "BaseBdev2", 00:14:21.465 "uuid": "7795b023-100c-44c4-a6d4-fdce6eb2ce53", 00:14:21.465 "is_configured": true, 00:14:21.465 "data_offset": 0, 00:14:21.465 "data_size": 65536 00:14:21.465 }, 00:14:21.465 { 
00:14:21.465 "name": "BaseBdev3", 00:14:21.465 "uuid": "29e07563-1826-462a-8720-a176d0f9439f", 00:14:21.465 "is_configured": true, 00:14:21.465 "data_offset": 0, 00:14:21.465 "data_size": 65536 00:14:21.465 }, 00:14:21.465 { 00:14:21.465 "name": "BaseBdev4", 00:14:21.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.465 "is_configured": false, 00:14:21.465 "data_offset": 0, 00:14:21.465 "data_size": 0 00:14:21.465 } 00:14:21.465 ] 00:14:21.465 }' 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.465 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 [2024-09-30 23:32:01.549974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:21.724 [2024-09-30 23:32:01.550114] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:21.724 [2024-09-30 23:32:01.550141] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:21.724 [2024-09-30 23:32:01.550465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:21.724 [2024-09-30 23:32:01.550961] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:21.724 BaseBdev4 00:14:21.724 [2024-09-30 23:32:01.551012] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:21.724 [2024-09-30 23:32:01.551225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.724 23:32:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.724 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.724 [ 00:14:21.984 { 00:14:21.984 "name": "BaseBdev4", 00:14:21.984 "aliases": [ 00:14:21.984 "34c621e5-ab4d-46e1-8c29-c9295b80d1f5" 00:14:21.984 ], 00:14:21.984 "product_name": "Malloc disk", 00:14:21.984 "block_size": 512, 00:14:21.984 "num_blocks": 65536, 00:14:21.984 "uuid": "34c621e5-ab4d-46e1-8c29-c9295b80d1f5", 00:14:21.984 "assigned_rate_limits": { 00:14:21.984 "rw_ios_per_sec": 0, 00:14:21.984 
"rw_mbytes_per_sec": 0, 00:14:21.984 "r_mbytes_per_sec": 0, 00:14:21.984 "w_mbytes_per_sec": 0 00:14:21.984 }, 00:14:21.984 "claimed": true, 00:14:21.984 "claim_type": "exclusive_write", 00:14:21.984 "zoned": false, 00:14:21.984 "supported_io_types": { 00:14:21.984 "read": true, 00:14:21.984 "write": true, 00:14:21.984 "unmap": true, 00:14:21.984 "flush": true, 00:14:21.984 "reset": true, 00:14:21.984 "nvme_admin": false, 00:14:21.984 "nvme_io": false, 00:14:21.984 "nvme_io_md": false, 00:14:21.984 "write_zeroes": true, 00:14:21.984 "zcopy": true, 00:14:21.984 "get_zone_info": false, 00:14:21.984 "zone_management": false, 00:14:21.984 "zone_append": false, 00:14:21.984 "compare": false, 00:14:21.984 "compare_and_write": false, 00:14:21.984 "abort": true, 00:14:21.984 "seek_hole": false, 00:14:21.984 "seek_data": false, 00:14:21.984 "copy": true, 00:14:21.984 "nvme_iov_md": false 00:14:21.984 }, 00:14:21.984 "memory_domains": [ 00:14:21.984 { 00:14:21.984 "dma_device_id": "system", 00:14:21.984 "dma_device_type": 1 00:14:21.984 }, 00:14:21.984 { 00:14:21.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.984 "dma_device_type": 2 00:14:21.984 } 00:14:21.984 ], 00:14:21.984 "driver_specific": {} 00:14:21.984 } 00:14:21.984 ] 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.984 23:32:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.984 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.984 "name": "Existed_Raid", 00:14:21.985 "uuid": "8707d23f-679b-4ec1-a8b8-95f5f360a9c6", 00:14:21.985 "strip_size_kb": 64, 00:14:21.985 "state": "online", 00:14:21.985 "raid_level": "raid5f", 00:14:21.985 "superblock": false, 00:14:21.985 "num_base_bdevs": 4, 00:14:21.985 "num_base_bdevs_discovered": 4, 00:14:21.985 "num_base_bdevs_operational": 4, 00:14:21.985 "base_bdevs_list": [ 00:14:21.985 { 00:14:21.985 "name": 
"BaseBdev1", 00:14:21.985 "uuid": "1135a5ea-c334-4fe6-9c0c-182bee662e51", 00:14:21.985 "is_configured": true, 00:14:21.985 "data_offset": 0, 00:14:21.985 "data_size": 65536 00:14:21.985 }, 00:14:21.985 { 00:14:21.985 "name": "BaseBdev2", 00:14:21.985 "uuid": "7795b023-100c-44c4-a6d4-fdce6eb2ce53", 00:14:21.985 "is_configured": true, 00:14:21.985 "data_offset": 0, 00:14:21.985 "data_size": 65536 00:14:21.985 }, 00:14:21.985 { 00:14:21.985 "name": "BaseBdev3", 00:14:21.985 "uuid": "29e07563-1826-462a-8720-a176d0f9439f", 00:14:21.985 "is_configured": true, 00:14:21.985 "data_offset": 0, 00:14:21.985 "data_size": 65536 00:14:21.985 }, 00:14:21.985 { 00:14:21.985 "name": "BaseBdev4", 00:14:21.985 "uuid": "34c621e5-ab4d-46e1-8c29-c9295b80d1f5", 00:14:21.985 "is_configured": true, 00:14:21.985 "data_offset": 0, 00:14:21.985 "data_size": 65536 00:14:21.985 } 00:14:21.985 ] 00:14:21.985 }' 00:14:21.985 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.985 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.244 23:32:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.244 [2024-09-30 23:32:01.997450] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.244 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.245 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:22.245 "name": "Existed_Raid", 00:14:22.245 "aliases": [ 00:14:22.245 "8707d23f-679b-4ec1-a8b8-95f5f360a9c6" 00:14:22.245 ], 00:14:22.245 "product_name": "Raid Volume", 00:14:22.245 "block_size": 512, 00:14:22.245 "num_blocks": 196608, 00:14:22.245 "uuid": "8707d23f-679b-4ec1-a8b8-95f5f360a9c6", 00:14:22.245 "assigned_rate_limits": { 00:14:22.245 "rw_ios_per_sec": 0, 00:14:22.245 "rw_mbytes_per_sec": 0, 00:14:22.245 "r_mbytes_per_sec": 0, 00:14:22.245 "w_mbytes_per_sec": 0 00:14:22.245 }, 00:14:22.245 "claimed": false, 00:14:22.245 "zoned": false, 00:14:22.245 "supported_io_types": { 00:14:22.245 "read": true, 00:14:22.245 "write": true, 00:14:22.245 "unmap": false, 00:14:22.245 "flush": false, 00:14:22.245 "reset": true, 00:14:22.245 "nvme_admin": false, 00:14:22.245 "nvme_io": false, 00:14:22.245 "nvme_io_md": false, 00:14:22.245 "write_zeroes": true, 00:14:22.245 "zcopy": false, 00:14:22.245 "get_zone_info": false, 00:14:22.245 "zone_management": false, 00:14:22.245 "zone_append": false, 00:14:22.245 "compare": false, 00:14:22.245 "compare_and_write": false, 00:14:22.245 "abort": false, 00:14:22.245 "seek_hole": false, 00:14:22.245 "seek_data": false, 00:14:22.245 "copy": false, 00:14:22.245 "nvme_iov_md": false 00:14:22.245 }, 00:14:22.245 "driver_specific": { 00:14:22.245 "raid": { 00:14:22.245 "uuid": "8707d23f-679b-4ec1-a8b8-95f5f360a9c6", 00:14:22.245 "strip_size_kb": 64, 
00:14:22.245 "state": "online", 00:14:22.245 "raid_level": "raid5f", 00:14:22.245 "superblock": false, 00:14:22.245 "num_base_bdevs": 4, 00:14:22.245 "num_base_bdevs_discovered": 4, 00:14:22.245 "num_base_bdevs_operational": 4, 00:14:22.245 "base_bdevs_list": [ 00:14:22.245 { 00:14:22.245 "name": "BaseBdev1", 00:14:22.245 "uuid": "1135a5ea-c334-4fe6-9c0c-182bee662e51", 00:14:22.245 "is_configured": true, 00:14:22.245 "data_offset": 0, 00:14:22.245 "data_size": 65536 00:14:22.245 }, 00:14:22.245 { 00:14:22.245 "name": "BaseBdev2", 00:14:22.245 "uuid": "7795b023-100c-44c4-a6d4-fdce6eb2ce53", 00:14:22.245 "is_configured": true, 00:14:22.245 "data_offset": 0, 00:14:22.245 "data_size": 65536 00:14:22.245 }, 00:14:22.245 { 00:14:22.245 "name": "BaseBdev3", 00:14:22.245 "uuid": "29e07563-1826-462a-8720-a176d0f9439f", 00:14:22.245 "is_configured": true, 00:14:22.245 "data_offset": 0, 00:14:22.245 "data_size": 65536 00:14:22.245 }, 00:14:22.245 { 00:14:22.245 "name": "BaseBdev4", 00:14:22.245 "uuid": "34c621e5-ab4d-46e1-8c29-c9295b80d1f5", 00:14:22.245 "is_configured": true, 00:14:22.245 "data_offset": 0, 00:14:22.245 "data_size": 65536 00:14:22.245 } 00:14:22.245 ] 00:14:22.245 } 00:14:22.245 } 00:14:22.245 }' 00:14:22.245 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.245 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:22.245 BaseBdev2 00:14:22.245 BaseBdev3 00:14:22.245 BaseBdev4' 00:14:22.245 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.505 23:32:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.505 [2024-09-30 23:32:02.284805] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.505 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.506 23:32:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.506 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.506 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.506 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.506 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.506 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.506 "name": "Existed_Raid", 00:14:22.506 "uuid": "8707d23f-679b-4ec1-a8b8-95f5f360a9c6", 00:14:22.506 "strip_size_kb": 64, 00:14:22.506 "state": "online", 00:14:22.506 "raid_level": "raid5f", 00:14:22.506 "superblock": false, 00:14:22.506 "num_base_bdevs": 4, 00:14:22.506 "num_base_bdevs_discovered": 3, 00:14:22.506 "num_base_bdevs_operational": 3, 00:14:22.506 "base_bdevs_list": [ 00:14:22.506 { 00:14:22.506 "name": null, 00:14:22.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.506 "is_configured": false, 00:14:22.506 "data_offset": 0, 00:14:22.506 "data_size": 65536 00:14:22.506 }, 00:14:22.506 { 00:14:22.506 "name": "BaseBdev2", 00:14:22.506 "uuid": "7795b023-100c-44c4-a6d4-fdce6eb2ce53", 00:14:22.506 "is_configured": true, 00:14:22.506 "data_offset": 0, 00:14:22.506 "data_size": 65536 00:14:22.506 }, 00:14:22.506 { 00:14:22.506 "name": "BaseBdev3", 00:14:22.506 "uuid": "29e07563-1826-462a-8720-a176d0f9439f", 00:14:22.506 "is_configured": true, 00:14:22.506 "data_offset": 0, 00:14:22.506 "data_size": 65536 00:14:22.506 }, 00:14:22.506 { 00:14:22.506 "name": "BaseBdev4", 00:14:22.506 "uuid": "34c621e5-ab4d-46e1-8c29-c9295b80d1f5", 00:14:22.506 "is_configured": true, 00:14:22.506 "data_offset": 0, 00:14:22.506 "data_size": 65536 00:14:22.506 } 00:14:22.506 ] 00:14:22.506 }' 00:14:22.506 
23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.506 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 [2024-09-30 23:32:02.747334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.072 [2024-09-30 23:32:02.747436] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.072 [2024-09-30 23:32:02.758606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 [2024-09-30 23:32:02.818482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.072 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.073 [2024-09-30 23:32:02.877113] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:23.073 [2024-09-30 23:32:02.877154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.073 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.073 23:32:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 BaseBdev2 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 [ 00:14:23.332 { 00:14:23.332 "name": "BaseBdev2", 00:14:23.332 "aliases": [ 00:14:23.332 "adac95c7-9502-4f5b-a0be-fb3eaebf288a" 00:14:23.332 ], 00:14:23.332 "product_name": "Malloc disk", 00:14:23.332 "block_size": 512, 00:14:23.332 "num_blocks": 65536, 00:14:23.332 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:23.332 "assigned_rate_limits": { 00:14:23.332 "rw_ios_per_sec": 0, 00:14:23.332 "rw_mbytes_per_sec": 0, 00:14:23.332 "r_mbytes_per_sec": 0, 00:14:23.332 "w_mbytes_per_sec": 0 00:14:23.332 }, 00:14:23.332 "claimed": false, 00:14:23.332 "zoned": false, 00:14:23.332 "supported_io_types": { 00:14:23.332 "read": true, 00:14:23.332 "write": true, 00:14:23.332 "unmap": true, 00:14:23.332 "flush": true, 00:14:23.332 "reset": true, 00:14:23.332 "nvme_admin": false, 00:14:23.332 "nvme_io": false, 00:14:23.332 "nvme_io_md": false, 00:14:23.332 "write_zeroes": true, 00:14:23.332 "zcopy": true, 00:14:23.332 "get_zone_info": false, 00:14:23.332 "zone_management": false, 00:14:23.332 "zone_append": false, 00:14:23.332 "compare": false, 00:14:23.332 "compare_and_write": false, 00:14:23.332 "abort": true, 00:14:23.332 "seek_hole": false, 00:14:23.332 "seek_data": false, 00:14:23.332 "copy": true, 00:14:23.332 "nvme_iov_md": false 00:14:23.332 }, 00:14:23.332 "memory_domains": [ 00:14:23.332 { 00:14:23.332 "dma_device_id": "system", 00:14:23.332 "dma_device_type": 1 00:14:23.332 }, 
00:14:23.332 { 00:14:23.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.332 "dma_device_type": 2 00:14:23.332 } 00:14:23.332 ], 00:14:23.332 "driver_specific": {} 00:14:23.332 } 00:14:23.332 ] 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 BaseBdev3 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 [ 00:14:23.332 { 00:14:23.332 "name": "BaseBdev3", 00:14:23.332 "aliases": [ 00:14:23.332 "b72108c5-cf2e-4a49-b784-761a6804f475" 00:14:23.332 ], 00:14:23.332 "product_name": "Malloc disk", 00:14:23.332 "block_size": 512, 00:14:23.332 "num_blocks": 65536, 00:14:23.332 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:23.332 "assigned_rate_limits": { 00:14:23.332 "rw_ios_per_sec": 0, 00:14:23.332 "rw_mbytes_per_sec": 0, 00:14:23.332 "r_mbytes_per_sec": 0, 00:14:23.332 "w_mbytes_per_sec": 0 00:14:23.332 }, 00:14:23.332 "claimed": false, 00:14:23.332 "zoned": false, 00:14:23.332 "supported_io_types": { 00:14:23.332 "read": true, 00:14:23.332 "write": true, 00:14:23.332 "unmap": true, 00:14:23.332 "flush": true, 00:14:23.332 "reset": true, 00:14:23.332 "nvme_admin": false, 00:14:23.332 "nvme_io": false, 00:14:23.332 "nvme_io_md": false, 00:14:23.332 "write_zeroes": true, 00:14:23.332 "zcopy": true, 00:14:23.332 "get_zone_info": false, 00:14:23.332 "zone_management": false, 00:14:23.332 "zone_append": false, 00:14:23.332 "compare": false, 00:14:23.332 "compare_and_write": false, 00:14:23.332 "abort": true, 00:14:23.332 "seek_hole": false, 00:14:23.332 "seek_data": false, 00:14:23.332 "copy": true, 00:14:23.332 "nvme_iov_md": false 00:14:23.332 }, 00:14:23.332 "memory_domains": [ 00:14:23.332 { 00:14:23.332 "dma_device_id": "system", 00:14:23.332 
"dma_device_type": 1 00:14:23.332 }, 00:14:23.332 { 00:14:23.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.332 "dma_device_type": 2 00:14:23.332 } 00:14:23.332 ], 00:14:23.332 "driver_specific": {} 00:14:23.332 } 00:14:23.332 ] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 BaseBdev4 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:23.332 23:32:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 [ 00:14:23.332 { 00:14:23.332 "name": "BaseBdev4", 00:14:23.332 "aliases": [ 00:14:23.332 "23fc04c3-9c8d-4245-a5b5-508b0a011cd2" 00:14:23.332 ], 00:14:23.332 "product_name": "Malloc disk", 00:14:23.332 "block_size": 512, 00:14:23.332 "num_blocks": 65536, 00:14:23.332 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:23.332 "assigned_rate_limits": { 00:14:23.332 "rw_ios_per_sec": 0, 00:14:23.332 "rw_mbytes_per_sec": 0, 00:14:23.332 "r_mbytes_per_sec": 0, 00:14:23.332 "w_mbytes_per_sec": 0 00:14:23.332 }, 00:14:23.332 "claimed": false, 00:14:23.332 "zoned": false, 00:14:23.332 "supported_io_types": { 00:14:23.332 "read": true, 00:14:23.332 "write": true, 00:14:23.332 "unmap": true, 00:14:23.332 "flush": true, 00:14:23.332 "reset": true, 00:14:23.332 "nvme_admin": false, 00:14:23.332 "nvme_io": false, 00:14:23.332 "nvme_io_md": false, 00:14:23.332 "write_zeroes": true, 00:14:23.332 "zcopy": true, 00:14:23.332 "get_zone_info": false, 00:14:23.332 "zone_management": false, 00:14:23.332 "zone_append": false, 00:14:23.332 "compare": false, 00:14:23.332 "compare_and_write": false, 00:14:23.332 "abort": true, 00:14:23.332 "seek_hole": false, 00:14:23.332 "seek_data": false, 00:14:23.332 "copy": true, 00:14:23.332 "nvme_iov_md": false 00:14:23.332 }, 00:14:23.332 "memory_domains": [ 00:14:23.332 { 00:14:23.332 
"dma_device_id": "system", 00:14:23.332 "dma_device_type": 1 00:14:23.332 }, 00:14:23.332 { 00:14:23.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.332 "dma_device_type": 2 00:14:23.332 } 00:14:23.332 ], 00:14:23.332 "driver_specific": {} 00:14:23.332 } 00:14:23.332 ] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 [2024-09-30 23:32:03.107781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:23.332 [2024-09-30 23:32:03.107925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:23.332 [2024-09-30 23:32:03.107969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.332 [2024-09-30 23:32:03.109737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.332 [2024-09-30 23:32:03.109837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.332 "name": "Existed_Raid", 00:14:23.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.332 "strip_size_kb": 64, 00:14:23.332 "state": "configuring", 00:14:23.332 "raid_level": "raid5f", 00:14:23.332 "superblock": false, 00:14:23.332 
"num_base_bdevs": 4, 00:14:23.332 "num_base_bdevs_discovered": 3, 00:14:23.332 "num_base_bdevs_operational": 4, 00:14:23.332 "base_bdevs_list": [ 00:14:23.332 { 00:14:23.332 "name": "BaseBdev1", 00:14:23.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.332 "is_configured": false, 00:14:23.332 "data_offset": 0, 00:14:23.332 "data_size": 0 00:14:23.332 }, 00:14:23.332 { 00:14:23.332 "name": "BaseBdev2", 00:14:23.332 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:23.332 "is_configured": true, 00:14:23.332 "data_offset": 0, 00:14:23.332 "data_size": 65536 00:14:23.332 }, 00:14:23.332 { 00:14:23.332 "name": "BaseBdev3", 00:14:23.332 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:23.332 "is_configured": true, 00:14:23.332 "data_offset": 0, 00:14:23.332 "data_size": 65536 00:14:23.332 }, 00:14:23.332 { 00:14:23.332 "name": "BaseBdev4", 00:14:23.332 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:23.332 "is_configured": true, 00:14:23.332 "data_offset": 0, 00:14:23.332 "data_size": 65536 00:14:23.332 } 00:14:23.332 ] 00:14:23.332 }' 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.332 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.900 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.901 [2024-09-30 23:32:03.582979] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.901 "name": "Existed_Raid", 00:14:23.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.901 "strip_size_kb": 64, 00:14:23.901 "state": "configuring", 00:14:23.901 "raid_level": "raid5f", 00:14:23.901 "superblock": false, 00:14:23.901 "num_base_bdevs": 4, 
00:14:23.901 "num_base_bdevs_discovered": 2, 00:14:23.901 "num_base_bdevs_operational": 4, 00:14:23.901 "base_bdevs_list": [ 00:14:23.901 { 00:14:23.901 "name": "BaseBdev1", 00:14:23.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.901 "is_configured": false, 00:14:23.901 "data_offset": 0, 00:14:23.901 "data_size": 0 00:14:23.901 }, 00:14:23.901 { 00:14:23.901 "name": null, 00:14:23.901 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:23.901 "is_configured": false, 00:14:23.901 "data_offset": 0, 00:14:23.901 "data_size": 65536 00:14:23.901 }, 00:14:23.901 { 00:14:23.901 "name": "BaseBdev3", 00:14:23.901 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:23.901 "is_configured": true, 00:14:23.901 "data_offset": 0, 00:14:23.901 "data_size": 65536 00:14:23.901 }, 00:14:23.901 { 00:14:23.901 "name": "BaseBdev4", 00:14:23.901 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:23.901 "is_configured": true, 00:14:23.901 "data_offset": 0, 00:14:23.901 "data_size": 65536 00:14:23.901 } 00:14:23.901 ] 00:14:23.901 }' 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.901 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.160 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.160 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.160 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.160 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:24.160 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.160 23:32:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:24.160 23:32:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:24.160 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.160 23:32:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.160 [2024-09-30 23:32:04.009230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.160 BaseBdev1 00:14:24.160 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.160 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:24.160 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:24.160 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.420 23:32:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.420 [ 00:14:24.420 { 00:14:24.420 "name": "BaseBdev1", 00:14:24.420 "aliases": [ 00:14:24.420 "7e41468b-d323-4727-bd0e-a55ef2c6847d" 00:14:24.420 ], 00:14:24.420 "product_name": "Malloc disk", 00:14:24.420 "block_size": 512, 00:14:24.420 "num_blocks": 65536, 00:14:24.420 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:24.420 "assigned_rate_limits": { 00:14:24.420 "rw_ios_per_sec": 0, 00:14:24.420 "rw_mbytes_per_sec": 0, 00:14:24.420 "r_mbytes_per_sec": 0, 00:14:24.420 "w_mbytes_per_sec": 0 00:14:24.420 }, 00:14:24.420 "claimed": true, 00:14:24.420 "claim_type": "exclusive_write", 00:14:24.420 "zoned": false, 00:14:24.420 "supported_io_types": { 00:14:24.420 "read": true, 00:14:24.420 "write": true, 00:14:24.420 "unmap": true, 00:14:24.420 "flush": true, 00:14:24.420 "reset": true, 00:14:24.420 "nvme_admin": false, 00:14:24.420 "nvme_io": false, 00:14:24.420 "nvme_io_md": false, 00:14:24.420 "write_zeroes": true, 00:14:24.420 "zcopy": true, 00:14:24.420 "get_zone_info": false, 00:14:24.420 "zone_management": false, 00:14:24.420 "zone_append": false, 00:14:24.420 "compare": false, 00:14:24.420 "compare_and_write": false, 00:14:24.420 "abort": true, 00:14:24.420 "seek_hole": false, 00:14:24.420 "seek_data": false, 00:14:24.420 "copy": true, 00:14:24.420 "nvme_iov_md": false 00:14:24.420 }, 00:14:24.420 "memory_domains": [ 00:14:24.420 { 00:14:24.420 "dma_device_id": "system", 00:14:24.420 "dma_device_type": 1 00:14:24.420 }, 00:14:24.420 { 00:14:24.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.420 "dma_device_type": 2 00:14:24.420 } 00:14:24.420 ], 00:14:24.420 "driver_specific": {} 00:14:24.420 } 00:14:24.420 ] 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:24.420 23:32:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.420 "name": "Existed_Raid", 00:14:24.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.420 "strip_size_kb": 64, 00:14:24.420 "state": 
"configuring", 00:14:24.420 "raid_level": "raid5f", 00:14:24.420 "superblock": false, 00:14:24.420 "num_base_bdevs": 4, 00:14:24.420 "num_base_bdevs_discovered": 3, 00:14:24.420 "num_base_bdevs_operational": 4, 00:14:24.420 "base_bdevs_list": [ 00:14:24.420 { 00:14:24.420 "name": "BaseBdev1", 00:14:24.420 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:24.420 "is_configured": true, 00:14:24.420 "data_offset": 0, 00:14:24.420 "data_size": 65536 00:14:24.420 }, 00:14:24.420 { 00:14:24.420 "name": null, 00:14:24.420 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:24.420 "is_configured": false, 00:14:24.420 "data_offset": 0, 00:14:24.420 "data_size": 65536 00:14:24.420 }, 00:14:24.420 { 00:14:24.420 "name": "BaseBdev3", 00:14:24.420 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:24.420 "is_configured": true, 00:14:24.420 "data_offset": 0, 00:14:24.420 "data_size": 65536 00:14:24.420 }, 00:14:24.420 { 00:14:24.420 "name": "BaseBdev4", 00:14:24.420 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:24.420 "is_configured": true, 00:14:24.420 "data_offset": 0, 00:14:24.420 "data_size": 65536 00:14:24.420 } 00:14:24.420 ] 00:14:24.420 }' 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.420 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.680 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.680 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.680 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.680 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.680 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.680 23:32:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:24.938 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:24.938 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.938 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.938 [2024-09-30 23:32:04.540396] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.938 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.938 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:24.938 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.938 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.938 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.939 23:32:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.939 "name": "Existed_Raid", 00:14:24.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.939 "strip_size_kb": 64, 00:14:24.939 "state": "configuring", 00:14:24.939 "raid_level": "raid5f", 00:14:24.939 "superblock": false, 00:14:24.939 "num_base_bdevs": 4, 00:14:24.939 "num_base_bdevs_discovered": 2, 00:14:24.939 "num_base_bdevs_operational": 4, 00:14:24.939 "base_bdevs_list": [ 00:14:24.939 { 00:14:24.939 "name": "BaseBdev1", 00:14:24.939 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:24.939 "is_configured": true, 00:14:24.939 "data_offset": 0, 00:14:24.939 "data_size": 65536 00:14:24.939 }, 00:14:24.939 { 00:14:24.939 "name": null, 00:14:24.939 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:24.939 "is_configured": false, 00:14:24.939 "data_offset": 0, 00:14:24.939 "data_size": 65536 00:14:24.939 }, 00:14:24.939 { 00:14:24.939 "name": null, 00:14:24.939 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:24.939 "is_configured": false, 00:14:24.939 "data_offset": 0, 00:14:24.939 "data_size": 65536 00:14:24.939 }, 00:14:24.939 { 00:14:24.939 "name": "BaseBdev4", 00:14:24.939 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:24.939 "is_configured": true, 00:14:24.939 "data_offset": 0, 00:14:24.939 "data_size": 65536 00:14:24.939 } 00:14:24.939 ] 00:14:24.939 }' 00:14:24.939 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.939 23:32:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.198 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.198 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.198 23:32:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.198 23:32:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.198 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.198 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:25.198 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:25.198 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.198 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.198 [2024-09-30 23:32:05.043586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.198 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.198 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:25.199 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.199 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.199 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.199 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.199 
23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.199 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.199 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.199 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.199 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.458 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.458 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.458 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.458 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.458 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.458 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.458 "name": "Existed_Raid", 00:14:25.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.458 "strip_size_kb": 64, 00:14:25.458 "state": "configuring", 00:14:25.458 "raid_level": "raid5f", 00:14:25.458 "superblock": false, 00:14:25.458 "num_base_bdevs": 4, 00:14:25.458 "num_base_bdevs_discovered": 3, 00:14:25.458 "num_base_bdevs_operational": 4, 00:14:25.458 "base_bdevs_list": [ 00:14:25.458 { 00:14:25.458 "name": "BaseBdev1", 00:14:25.458 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:25.458 "is_configured": true, 00:14:25.458 "data_offset": 0, 00:14:25.458 "data_size": 65536 00:14:25.458 }, 00:14:25.458 { 00:14:25.458 "name": null, 00:14:25.458 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:25.458 "is_configured": 
false, 00:14:25.458 "data_offset": 0, 00:14:25.458 "data_size": 65536 00:14:25.458 }, 00:14:25.458 { 00:14:25.458 "name": "BaseBdev3", 00:14:25.458 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:25.458 "is_configured": true, 00:14:25.458 "data_offset": 0, 00:14:25.458 "data_size": 65536 00:14:25.458 }, 00:14:25.458 { 00:14:25.458 "name": "BaseBdev4", 00:14:25.458 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:25.458 "is_configured": true, 00:14:25.458 "data_offset": 0, 00:14:25.458 "data_size": 65536 00:14:25.458 } 00:14:25.458 ] 00:14:25.459 }' 00:14:25.459 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.459 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.718 [2024-09-30 23:32:05.538746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.718 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.978 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.978 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.978 "name": "Existed_Raid", 00:14:25.978 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:25.978 "strip_size_kb": 64, 00:14:25.978 "state": "configuring", 00:14:25.978 "raid_level": "raid5f", 00:14:25.978 "superblock": false, 00:14:25.978 "num_base_bdevs": 4, 00:14:25.978 "num_base_bdevs_discovered": 2, 00:14:25.978 "num_base_bdevs_operational": 4, 00:14:25.978 "base_bdevs_list": [ 00:14:25.978 { 00:14:25.978 "name": null, 00:14:25.978 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:25.978 "is_configured": false, 00:14:25.978 "data_offset": 0, 00:14:25.978 "data_size": 65536 00:14:25.978 }, 00:14:25.978 { 00:14:25.978 "name": null, 00:14:25.978 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:25.978 "is_configured": false, 00:14:25.978 "data_offset": 0, 00:14:25.978 "data_size": 65536 00:14:25.978 }, 00:14:25.978 { 00:14:25.978 "name": "BaseBdev3", 00:14:25.978 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:25.978 "is_configured": true, 00:14:25.978 "data_offset": 0, 00:14:25.978 "data_size": 65536 00:14:25.978 }, 00:14:25.978 { 00:14:25.978 "name": "BaseBdev4", 00:14:25.978 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:25.978 "is_configured": true, 00:14:25.978 "data_offset": 0, 00:14:25.978 "data_size": 65536 00:14:25.978 } 00:14:25.978 ] 00:14:25.978 }' 00:14:25.978 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.978 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.238 23:32:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:26.238 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.238 23:32:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 23:32:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 [2024-09-30 23:32:06.024411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.238 "name": "Existed_Raid", 00:14:26.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.238 "strip_size_kb": 64, 00:14:26.238 "state": "configuring", 00:14:26.238 "raid_level": "raid5f", 00:14:26.238 "superblock": false, 00:14:26.238 "num_base_bdevs": 4, 00:14:26.238 "num_base_bdevs_discovered": 3, 00:14:26.238 "num_base_bdevs_operational": 4, 00:14:26.238 "base_bdevs_list": [ 00:14:26.238 { 00:14:26.238 "name": null, 00:14:26.238 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:26.238 "is_configured": false, 00:14:26.238 "data_offset": 0, 00:14:26.238 "data_size": 65536 00:14:26.238 }, 00:14:26.238 { 00:14:26.238 "name": "BaseBdev2", 00:14:26.238 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:26.238 "is_configured": true, 00:14:26.238 "data_offset": 0, 00:14:26.238 "data_size": 65536 00:14:26.238 }, 00:14:26.238 { 00:14:26.238 "name": "BaseBdev3", 00:14:26.238 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:26.238 "is_configured": true, 00:14:26.238 "data_offset": 0, 00:14:26.238 "data_size": 65536 00:14:26.238 }, 00:14:26.238 { 00:14:26.238 "name": "BaseBdev4", 00:14:26.238 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:26.238 "is_configured": true, 00:14:26.238 "data_offset": 0, 00:14:26.238 "data_size": 65536 00:14:26.238 } 00:14:26.238 ] 00:14:26.238 }' 00:14:26.238 23:32:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.238 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7e41468b-d323-4727-bd0e-a55ef2c6847d 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 [2024-09-30 23:32:06.610347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.809 [2024-09-30 
23:32:06.610471] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:26.809 [2024-09-30 23:32:06.610495] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:26.809 [2024-09-30 23:32:06.610768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:26.809 [2024-09-30 23:32:06.611234] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:26.809 [2024-09-30 23:32:06.611285] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:26.809 [2024-09-30 23:32:06.611501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.809 NewBaseBdev 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 [ 00:14:26.809 { 00:14:26.809 "name": "NewBaseBdev", 00:14:26.809 "aliases": [ 00:14:26.809 "7e41468b-d323-4727-bd0e-a55ef2c6847d" 00:14:26.809 ], 00:14:26.809 "product_name": "Malloc disk", 00:14:26.809 "block_size": 512, 00:14:26.809 "num_blocks": 65536, 00:14:26.809 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:26.809 "assigned_rate_limits": { 00:14:26.809 "rw_ios_per_sec": 0, 00:14:26.809 "rw_mbytes_per_sec": 0, 00:14:26.809 "r_mbytes_per_sec": 0, 00:14:26.809 "w_mbytes_per_sec": 0 00:14:26.809 }, 00:14:26.809 "claimed": true, 00:14:26.809 "claim_type": "exclusive_write", 00:14:26.809 "zoned": false, 00:14:26.809 "supported_io_types": { 00:14:26.809 "read": true, 00:14:26.809 "write": true, 00:14:26.809 "unmap": true, 00:14:26.809 "flush": true, 00:14:26.809 "reset": true, 00:14:26.809 "nvme_admin": false, 00:14:26.809 "nvme_io": false, 00:14:26.809 "nvme_io_md": false, 00:14:26.809 "write_zeroes": true, 00:14:26.809 "zcopy": true, 00:14:26.809 "get_zone_info": false, 00:14:26.809 "zone_management": false, 00:14:26.809 "zone_append": false, 00:14:26.809 "compare": false, 00:14:26.809 "compare_and_write": false, 00:14:26.809 "abort": true, 00:14:26.809 "seek_hole": false, 00:14:26.809 "seek_data": false, 00:14:26.809 "copy": true, 00:14:26.809 "nvme_iov_md": false 00:14:26.809 }, 00:14:26.809 "memory_domains": [ 00:14:26.809 { 00:14:26.809 "dma_device_id": "system", 00:14:26.809 "dma_device_type": 1 00:14:26.809 }, 00:14:26.809 { 00:14:26.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.809 "dma_device_type": 2 00:14:26.809 } 
00:14:26.809 ], 00:14:26.809 "driver_specific": {} 00:14:26.809 } 00:14:26.809 ] 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.809 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.069 23:32:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.069 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.069 "name": "Existed_Raid", 00:14:27.069 "uuid": "13353e5d-c783-4a72-824d-ee187f765225", 00:14:27.069 "strip_size_kb": 64, 00:14:27.069 "state": "online", 00:14:27.069 "raid_level": "raid5f", 00:14:27.069 "superblock": false, 00:14:27.069 "num_base_bdevs": 4, 00:14:27.069 "num_base_bdevs_discovered": 4, 00:14:27.069 "num_base_bdevs_operational": 4, 00:14:27.069 "base_bdevs_list": [ 00:14:27.069 { 00:14:27.069 "name": "NewBaseBdev", 00:14:27.069 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:27.069 "is_configured": true, 00:14:27.069 "data_offset": 0, 00:14:27.069 "data_size": 65536 00:14:27.069 }, 00:14:27.069 { 00:14:27.069 "name": "BaseBdev2", 00:14:27.069 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:27.069 "is_configured": true, 00:14:27.069 "data_offset": 0, 00:14:27.069 "data_size": 65536 00:14:27.069 }, 00:14:27.069 { 00:14:27.069 "name": "BaseBdev3", 00:14:27.069 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:27.069 "is_configured": true, 00:14:27.069 "data_offset": 0, 00:14:27.069 "data_size": 65536 00:14:27.069 }, 00:14:27.069 { 00:14:27.069 "name": "BaseBdev4", 00:14:27.069 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:27.069 "is_configured": true, 00:14:27.069 "data_offset": 0, 00:14:27.069 "data_size": 65536 00:14:27.069 } 00:14:27.069 ] 00:14:27.069 }' 00:14:27.069 23:32:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.069 23:32:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:27.329 [2024-09-30 23:32:07.085768] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:27.329 "name": "Existed_Raid", 00:14:27.329 "aliases": [ 00:14:27.329 "13353e5d-c783-4a72-824d-ee187f765225" 00:14:27.329 ], 00:14:27.329 "product_name": "Raid Volume", 00:14:27.329 "block_size": 512, 00:14:27.329 "num_blocks": 196608, 00:14:27.329 "uuid": "13353e5d-c783-4a72-824d-ee187f765225", 00:14:27.329 "assigned_rate_limits": { 00:14:27.329 "rw_ios_per_sec": 0, 00:14:27.329 "rw_mbytes_per_sec": 0, 00:14:27.329 "r_mbytes_per_sec": 0, 00:14:27.329 "w_mbytes_per_sec": 0 00:14:27.329 }, 00:14:27.329 "claimed": false, 00:14:27.329 "zoned": false, 00:14:27.329 "supported_io_types": { 00:14:27.329 "read": true, 00:14:27.329 "write": true, 00:14:27.329 "unmap": false, 00:14:27.329 "flush": false, 00:14:27.329 "reset": true, 00:14:27.329 "nvme_admin": false, 00:14:27.329 "nvme_io": false, 00:14:27.329 "nvme_io_md": 
false, 00:14:27.329 "write_zeroes": true, 00:14:27.329 "zcopy": false, 00:14:27.329 "get_zone_info": false, 00:14:27.329 "zone_management": false, 00:14:27.329 "zone_append": false, 00:14:27.329 "compare": false, 00:14:27.329 "compare_and_write": false, 00:14:27.329 "abort": false, 00:14:27.329 "seek_hole": false, 00:14:27.329 "seek_data": false, 00:14:27.329 "copy": false, 00:14:27.329 "nvme_iov_md": false 00:14:27.329 }, 00:14:27.329 "driver_specific": { 00:14:27.329 "raid": { 00:14:27.329 "uuid": "13353e5d-c783-4a72-824d-ee187f765225", 00:14:27.329 "strip_size_kb": 64, 00:14:27.329 "state": "online", 00:14:27.329 "raid_level": "raid5f", 00:14:27.329 "superblock": false, 00:14:27.329 "num_base_bdevs": 4, 00:14:27.329 "num_base_bdevs_discovered": 4, 00:14:27.329 "num_base_bdevs_operational": 4, 00:14:27.329 "base_bdevs_list": [ 00:14:27.329 { 00:14:27.329 "name": "NewBaseBdev", 00:14:27.329 "uuid": "7e41468b-d323-4727-bd0e-a55ef2c6847d", 00:14:27.329 "is_configured": true, 00:14:27.329 "data_offset": 0, 00:14:27.329 "data_size": 65536 00:14:27.329 }, 00:14:27.329 { 00:14:27.329 "name": "BaseBdev2", 00:14:27.329 "uuid": "adac95c7-9502-4f5b-a0be-fb3eaebf288a", 00:14:27.329 "is_configured": true, 00:14:27.329 "data_offset": 0, 00:14:27.329 "data_size": 65536 00:14:27.329 }, 00:14:27.329 { 00:14:27.329 "name": "BaseBdev3", 00:14:27.329 "uuid": "b72108c5-cf2e-4a49-b784-761a6804f475", 00:14:27.329 "is_configured": true, 00:14:27.329 "data_offset": 0, 00:14:27.329 "data_size": 65536 00:14:27.329 }, 00:14:27.329 { 00:14:27.329 "name": "BaseBdev4", 00:14:27.329 "uuid": "23fc04c3-9c8d-4245-a5b5-508b0a011cd2", 00:14:27.329 "is_configured": true, 00:14:27.329 "data_offset": 0, 00:14:27.329 "data_size": 65536 00:14:27.329 } 00:14:27.329 ] 00:14:27.329 } 00:14:27.329 } 00:14:27.329 }' 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:27.329 23:32:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:27.329 BaseBdev2 00:14:27.329 BaseBdev3 00:14:27.329 BaseBdev4' 00:14:27.329 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.589 [2024-09-30 23:32:07.412989] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.589 [2024-09-30 23:32:07.413060] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.589 [2024-09-30 23:32:07.413159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.589 [2024-09-30 23:32:07.413433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.589 [2024-09-30 23:32:07.413495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.589 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93294 00:14:27.590 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93294 ']' 00:14:27.590 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93294 00:14:27.590 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:27.590 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.590 23:32:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93294 00:14:27.849 killing process with pid 93294 00:14:27.849 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.849 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.849 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93294' 00:14:27.849 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93294 00:14:27.849 [2024-09-30 23:32:07.460240] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.849 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93294 00:14:27.849 [2024-09-30 23:32:07.499179] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:28.109 00:14:28.109 real 0m9.515s 00:14:28.109 user 0m16.151s 00:14:28.109 sys 0m2.067s 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.109 ************************************ 00:14:28.109 END TEST raid5f_state_function_test 00:14:28.109 ************************************ 00:14:28.109 23:32:07 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:28.109 23:32:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:28.109 23:32:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.109 23:32:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.109 ************************************ 00:14:28.109 START TEST 
raid5f_state_function_test_sb 00:14:28.109 ************************************ 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.109 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:28.110 
23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93940 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93940' 00:14:28.110 Process raid pid: 93940 00:14:28.110 23:32:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93940 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93940 ']' 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.110 23:32:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.110 [2024-09-30 23:32:07.905130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:14:28.110 [2024-09-30 23:32:07.905343] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.370 [2024-09-30 23:32:08.066055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.370 [2024-09-30 23:32:08.111612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.370 [2024-09-30 23:32:08.153591] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.370 [2024-09-30 23:32:08.153700] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.939 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.939 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:28.939 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:28.939 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.939 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.939 [2024-09-30 23:32:08.735111] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:28.939 [2024-09-30 23:32:08.735238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:28.939 [2024-09-30 23:32:08.735269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:28.939 [2024-09-30 23:32:08.735292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.939 [2024-09-30 23:32:08.735309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:28.939 [2024-09-30 23:32:08.735348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:28.939 [2024-09-30 23:32:08.735374] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:28.939 [2024-09-30 23:32:08.735394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:28.939 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.939 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.940 "name": "Existed_Raid", 00:14:28.940 "uuid": "d70eaa9c-e257-4db1-ac19-1261cf1df201", 00:14:28.940 "strip_size_kb": 64, 00:14:28.940 "state": "configuring", 00:14:28.940 "raid_level": "raid5f", 00:14:28.940 "superblock": true, 00:14:28.940 "num_base_bdevs": 4, 00:14:28.940 "num_base_bdevs_discovered": 0, 00:14:28.940 "num_base_bdevs_operational": 4, 00:14:28.940 "base_bdevs_list": [ 00:14:28.940 { 00:14:28.940 "name": "BaseBdev1", 00:14:28.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.940 "is_configured": false, 00:14:28.940 "data_offset": 0, 00:14:28.940 "data_size": 0 00:14:28.940 }, 00:14:28.940 { 00:14:28.940 "name": "BaseBdev2", 00:14:28.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.940 "is_configured": false, 00:14:28.940 "data_offset": 0, 00:14:28.940 "data_size": 0 00:14:28.940 }, 00:14:28.940 { 00:14:28.940 "name": "BaseBdev3", 00:14:28.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.940 "is_configured": false, 00:14:28.940 "data_offset": 0, 00:14:28.940 "data_size": 0 00:14:28.940 }, 00:14:28.940 { 00:14:28.940 "name": "BaseBdev4", 00:14:28.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.940 "is_configured": false, 00:14:28.940 "data_offset": 0, 00:14:28.940 "data_size": 0 00:14:28.940 } 00:14:28.940 ] 00:14:28.940 }' 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.940 23:32:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
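The `verify_raid_bdev_state` helper traced above fetches the raid bdev JSON with `rpc_cmd bdev_raid_get_bdevs all` and filters it through `jq -r '.[] | select(.name == "Existed_Raid")'`. A minimal standalone sketch of that filtering step, run against a copy of the JSON from this log rather than a live SPDK target (the `info` variable is the assumption; on a real target it would come from `scripts/rpc.py bdev_raid_get_bdevs all`):

```shell
#!/bin/sh
# Fields copied from the bdev_raid_get_bdevs dump in this log; requires jq.
info='[{"name":"Existed_Raid","state":"configuring","raid_level":"raid5f","strip_size_kb":64,"num_base_bdevs":4,"num_base_bdevs_discovered":0,"num_base_bdevs_operational":4}]'

# Same jq shape the test uses: select the raid bdev by name, read its fields.
state=$(echo "$info" | jq -r '.[] | select(.name == "Existed_Raid") | .state')
discovered=$(echo "$info" | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered')

echo "$state $discovered"   # prints: configuring 0
```

With no base bdevs registered yet, the raid stays in `configuring` with `num_base_bdevs_discovered` at 0, which is exactly the state the first dump above shows.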
00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.510 [2024-09-30 23:32:09.162277] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:29.510 [2024-09-30 23:32:09.162317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.510 [2024-09-30 23:32:09.174291] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:29.510 [2024-09-30 23:32:09.174371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:29.510 [2024-09-30 23:32:09.174412] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:29.510 [2024-09-30 23:32:09.174433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:29.510 [2024-09-30 23:32:09.174451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:29.510 [2024-09-30 23:32:09.174470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:29.510 [2024-09-30 23:32:09.174488] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:29.510 [2024-09-30 23:32:09.174508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.510 BaseBdev1 00:14:29.510 [2024-09-30 23:32:09.194997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.510 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.510 [ 00:14:29.510 { 00:14:29.510 "name": "BaseBdev1", 00:14:29.510 "aliases": [ 00:14:29.510 "83d027d7-dc0d-4ac0-9257-a481e7b0977b" 00:14:29.510 ], 00:14:29.510 "product_name": "Malloc disk", 00:14:29.510 "block_size": 512, 00:14:29.510 "num_blocks": 65536, 00:14:29.510 "uuid": "83d027d7-dc0d-4ac0-9257-a481e7b0977b", 00:14:29.510 "assigned_rate_limits": { 00:14:29.510 "rw_ios_per_sec": 0, 00:14:29.510 "rw_mbytes_per_sec": 0, 00:14:29.510 "r_mbytes_per_sec": 0, 00:14:29.510 "w_mbytes_per_sec": 0 00:14:29.510 }, 00:14:29.510 "claimed": true, 00:14:29.510 "claim_type": "exclusive_write", 00:14:29.510 "zoned": false, 00:14:29.510 "supported_io_types": { 00:14:29.510 "read": true, 00:14:29.510 "write": true, 00:14:29.510 "unmap": true, 00:14:29.510 "flush": true, 00:14:29.510 "reset": true, 00:14:29.510 "nvme_admin": false, 00:14:29.510 "nvme_io": false, 00:14:29.510 "nvme_io_md": false, 00:14:29.510 "write_zeroes": true, 00:14:29.510 "zcopy": true, 00:14:29.510 "get_zone_info": false, 00:14:29.510 "zone_management": false, 00:14:29.510 "zone_append": false, 00:14:29.510 "compare": false, 00:14:29.510 "compare_and_write": false, 00:14:29.510 "abort": true, 00:14:29.510 "seek_hole": false, 00:14:29.510 "seek_data": false, 00:14:29.510 "copy": true, 00:14:29.510 "nvme_iov_md": false 00:14:29.510 }, 00:14:29.510 "memory_domains": [ 00:14:29.510 { 00:14:29.510 "dma_device_id": "system", 00:14:29.511 "dma_device_type": 1 00:14:29.511 }, 00:14:29.511 { 00:14:29.511 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:29.511 "dma_device_type": 2 00:14:29.511 } 00:14:29.511 ], 00:14:29.511 "driver_specific": {} 00:14:29.511 } 00:14:29.511 ] 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.511 "name": "Existed_Raid", 00:14:29.511 "uuid": "66217e30-b64c-4014-89f2-c44d86f4b8d3", 00:14:29.511 "strip_size_kb": 64, 00:14:29.511 "state": "configuring", 00:14:29.511 "raid_level": "raid5f", 00:14:29.511 "superblock": true, 00:14:29.511 "num_base_bdevs": 4, 00:14:29.511 "num_base_bdevs_discovered": 1, 00:14:29.511 "num_base_bdevs_operational": 4, 00:14:29.511 "base_bdevs_list": [ 00:14:29.511 { 00:14:29.511 "name": "BaseBdev1", 00:14:29.511 "uuid": "83d027d7-dc0d-4ac0-9257-a481e7b0977b", 00:14:29.511 "is_configured": true, 00:14:29.511 "data_offset": 2048, 00:14:29.511 "data_size": 63488 00:14:29.511 }, 00:14:29.511 { 00:14:29.511 "name": "BaseBdev2", 00:14:29.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.511 "is_configured": false, 00:14:29.511 "data_offset": 0, 00:14:29.511 "data_size": 0 00:14:29.511 }, 00:14:29.511 { 00:14:29.511 "name": "BaseBdev3", 00:14:29.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.511 "is_configured": false, 00:14:29.511 "data_offset": 0, 00:14:29.511 "data_size": 0 00:14:29.511 }, 00:14:29.511 { 00:14:29.511 "name": "BaseBdev4", 00:14:29.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.511 "is_configured": false, 00:14:29.511 "data_offset": 0, 00:14:29.511 "data_size": 0 00:14:29.511 } 00:14:29.511 ] 00:14:29.511 }' 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.511 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:30.081 23:32:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.081 [2024-09-30 23:32:09.698164] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.081 [2024-09-30 23:32:09.698255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.081 [2024-09-30 23:32:09.710187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.081 [2024-09-30 23:32:09.712079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.081 [2024-09-30 23:32:09.712180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.081 [2024-09-30 23:32:09.712206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:30.081 [2024-09-30 23:32:09.712227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.081 [2024-09-30 23:32:09.712244] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:30.081 [2024-09-30 23:32:09.712264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.081 23:32:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.081 "name": "Existed_Raid", 00:14:30.081 "uuid": "bb7ef745-f8b4-4733-a8c8-ff29741632d8", 00:14:30.081 "strip_size_kb": 64, 00:14:30.081 "state": "configuring", 00:14:30.081 "raid_level": "raid5f", 00:14:30.081 "superblock": true, 00:14:30.081 "num_base_bdevs": 4, 00:14:30.081 "num_base_bdevs_discovered": 1, 00:14:30.081 "num_base_bdevs_operational": 4, 00:14:30.081 "base_bdevs_list": [ 00:14:30.081 { 00:14:30.081 "name": "BaseBdev1", 00:14:30.081 "uuid": "83d027d7-dc0d-4ac0-9257-a481e7b0977b", 00:14:30.081 "is_configured": true, 00:14:30.081 "data_offset": 2048, 00:14:30.081 "data_size": 63488 00:14:30.081 }, 00:14:30.081 { 00:14:30.081 "name": "BaseBdev2", 00:14:30.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.081 "is_configured": false, 00:14:30.081 "data_offset": 0, 00:14:30.081 "data_size": 0 00:14:30.081 }, 00:14:30.081 { 00:14:30.081 "name": "BaseBdev3", 00:14:30.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.081 "is_configured": false, 00:14:30.081 "data_offset": 0, 00:14:30.081 "data_size": 0 00:14:30.081 }, 00:14:30.081 { 00:14:30.081 "name": "BaseBdev4", 00:14:30.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.081 "is_configured": false, 00:14:30.081 "data_offset": 0, 00:14:30.081 "data_size": 0 00:14:30.081 } 00:14:30.081 ] 00:14:30.081 }' 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.081 23:32:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.341 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:30.341 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:30.341 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.601 [2024-09-30 23:32:10.199571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.601 BaseBdev2 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.601 [ 00:14:30.601 { 00:14:30.601 "name": "BaseBdev2", 00:14:30.601 "aliases": [ 00:14:30.601 
"f5302348-ad63-49f2-9e76-bb9fbd0fa1c9" 00:14:30.601 ], 00:14:30.601 "product_name": "Malloc disk", 00:14:30.601 "block_size": 512, 00:14:30.601 "num_blocks": 65536, 00:14:30.601 "uuid": "f5302348-ad63-49f2-9e76-bb9fbd0fa1c9", 00:14:30.601 "assigned_rate_limits": { 00:14:30.601 "rw_ios_per_sec": 0, 00:14:30.601 "rw_mbytes_per_sec": 0, 00:14:30.601 "r_mbytes_per_sec": 0, 00:14:30.601 "w_mbytes_per_sec": 0 00:14:30.601 }, 00:14:30.601 "claimed": true, 00:14:30.601 "claim_type": "exclusive_write", 00:14:30.601 "zoned": false, 00:14:30.601 "supported_io_types": { 00:14:30.601 "read": true, 00:14:30.601 "write": true, 00:14:30.601 "unmap": true, 00:14:30.601 "flush": true, 00:14:30.601 "reset": true, 00:14:30.601 "nvme_admin": false, 00:14:30.601 "nvme_io": false, 00:14:30.601 "nvme_io_md": false, 00:14:30.601 "write_zeroes": true, 00:14:30.601 "zcopy": true, 00:14:30.601 "get_zone_info": false, 00:14:30.601 "zone_management": false, 00:14:30.601 "zone_append": false, 00:14:30.601 "compare": false, 00:14:30.601 "compare_and_write": false, 00:14:30.601 "abort": true, 00:14:30.601 "seek_hole": false, 00:14:30.601 "seek_data": false, 00:14:30.601 "copy": true, 00:14:30.601 "nvme_iov_md": false 00:14:30.601 }, 00:14:30.601 "memory_domains": [ 00:14:30.601 { 00:14:30.601 "dma_device_id": "system", 00:14:30.601 "dma_device_type": 1 00:14:30.601 }, 00:14:30.601 { 00:14:30.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.601 "dma_device_type": 2 00:14:30.601 } 00:14:30.601 ], 00:14:30.601 "driver_specific": {} 00:14:30.601 } 00:14:30.601 ] 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.601 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.602 "name": "Existed_Raid", 00:14:30.602 "uuid": 
"bb7ef745-f8b4-4733-a8c8-ff29741632d8", 00:14:30.602 "strip_size_kb": 64, 00:14:30.602 "state": "configuring", 00:14:30.602 "raid_level": "raid5f", 00:14:30.602 "superblock": true, 00:14:30.602 "num_base_bdevs": 4, 00:14:30.602 "num_base_bdevs_discovered": 2, 00:14:30.602 "num_base_bdevs_operational": 4, 00:14:30.602 "base_bdevs_list": [ 00:14:30.602 { 00:14:30.602 "name": "BaseBdev1", 00:14:30.602 "uuid": "83d027d7-dc0d-4ac0-9257-a481e7b0977b", 00:14:30.602 "is_configured": true, 00:14:30.602 "data_offset": 2048, 00:14:30.602 "data_size": 63488 00:14:30.602 }, 00:14:30.602 { 00:14:30.602 "name": "BaseBdev2", 00:14:30.602 "uuid": "f5302348-ad63-49f2-9e76-bb9fbd0fa1c9", 00:14:30.602 "is_configured": true, 00:14:30.602 "data_offset": 2048, 00:14:30.602 "data_size": 63488 00:14:30.602 }, 00:14:30.602 { 00:14:30.602 "name": "BaseBdev3", 00:14:30.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.602 "is_configured": false, 00:14:30.602 "data_offset": 0, 00:14:30.602 "data_size": 0 00:14:30.602 }, 00:14:30.602 { 00:14:30.602 "name": "BaseBdev4", 00:14:30.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.602 "is_configured": false, 00:14:30.602 "data_offset": 0, 00:14:30.602 "data_size": 0 00:14:30.602 } 00:14:30.602 ] 00:14:30.602 }' 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.602 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.862 [2024-09-30 23:32:10.593901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.862 BaseBdev3 
00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.862 [ 00:14:30.862 { 00:14:30.862 "name": "BaseBdev3", 00:14:30.862 "aliases": [ 00:14:30.862 "9133c7cd-5592-41c7-bc56-af8fd0071446" 00:14:30.862 ], 00:14:30.862 "product_name": "Malloc disk", 00:14:30.862 "block_size": 512, 00:14:30.862 "num_blocks": 65536, 00:14:30.862 "uuid": "9133c7cd-5592-41c7-bc56-af8fd0071446", 00:14:30.862 
"assigned_rate_limits": { 00:14:30.862 "rw_ios_per_sec": 0, 00:14:30.862 "rw_mbytes_per_sec": 0, 00:14:30.862 "r_mbytes_per_sec": 0, 00:14:30.862 "w_mbytes_per_sec": 0 00:14:30.862 }, 00:14:30.862 "claimed": true, 00:14:30.862 "claim_type": "exclusive_write", 00:14:30.862 "zoned": false, 00:14:30.862 "supported_io_types": { 00:14:30.862 "read": true, 00:14:30.862 "write": true, 00:14:30.862 "unmap": true, 00:14:30.862 "flush": true, 00:14:30.862 "reset": true, 00:14:30.862 "nvme_admin": false, 00:14:30.862 "nvme_io": false, 00:14:30.862 "nvme_io_md": false, 00:14:30.862 "write_zeroes": true, 00:14:30.862 "zcopy": true, 00:14:30.862 "get_zone_info": false, 00:14:30.862 "zone_management": false, 00:14:30.862 "zone_append": false, 00:14:30.862 "compare": false, 00:14:30.862 "compare_and_write": false, 00:14:30.862 "abort": true, 00:14:30.862 "seek_hole": false, 00:14:30.862 "seek_data": false, 00:14:30.862 "copy": true, 00:14:30.862 "nvme_iov_md": false 00:14:30.862 }, 00:14:30.862 "memory_domains": [ 00:14:30.862 { 00:14:30.862 "dma_device_id": "system", 00:14:30.862 "dma_device_type": 1 00:14:30.862 }, 00:14:30.862 { 00:14:30.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.862 "dma_device_type": 2 00:14:30.862 } 00:14:30.862 ], 00:14:30.862 "driver_specific": {} 00:14:30.862 } 00:14:30.862 ] 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.862 "name": "Existed_Raid", 00:14:30.862 "uuid": "bb7ef745-f8b4-4733-a8c8-ff29741632d8", 00:14:30.862 "strip_size_kb": 64, 00:14:30.862 "state": "configuring", 00:14:30.862 "raid_level": "raid5f", 00:14:30.862 "superblock": true, 00:14:30.862 "num_base_bdevs": 4, 00:14:30.862 "num_base_bdevs_discovered": 3, 
00:14:30.862 "num_base_bdevs_operational": 4, 00:14:30.862 "base_bdevs_list": [ 00:14:30.862 { 00:14:30.862 "name": "BaseBdev1", 00:14:30.862 "uuid": "83d027d7-dc0d-4ac0-9257-a481e7b0977b", 00:14:30.862 "is_configured": true, 00:14:30.862 "data_offset": 2048, 00:14:30.862 "data_size": 63488 00:14:30.862 }, 00:14:30.862 { 00:14:30.862 "name": "BaseBdev2", 00:14:30.862 "uuid": "f5302348-ad63-49f2-9e76-bb9fbd0fa1c9", 00:14:30.862 "is_configured": true, 00:14:30.862 "data_offset": 2048, 00:14:30.862 "data_size": 63488 00:14:30.862 }, 00:14:30.862 { 00:14:30.862 "name": "BaseBdev3", 00:14:30.862 "uuid": "9133c7cd-5592-41c7-bc56-af8fd0071446", 00:14:30.862 "is_configured": true, 00:14:30.862 "data_offset": 2048, 00:14:30.862 "data_size": 63488 00:14:30.862 }, 00:14:30.862 { 00:14:30.862 "name": "BaseBdev4", 00:14:30.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.862 "is_configured": false, 00:14:30.862 "data_offset": 0, 00:14:30.862 "data_size": 0 00:14:30.862 } 00:14:30.862 ] 00:14:30.862 }' 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.862 23:32:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 [2024-09-30 23:32:11.036133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:31.432 [2024-09-30 23:32:11.036430] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:31.432 [2024-09-30 23:32:11.036483] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:31.432 [2024-09-30 
23:32:11.036790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:31.432 BaseBdev4 00:14:31.432 [2024-09-30 23:32:11.037270] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:31.432 [2024-09-30 23:32:11.037330] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:31.432 [2024-09-30 23:32:11.037485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:31.432 23:32:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.432 [ 00:14:31.432 { 00:14:31.432 "name": "BaseBdev4", 00:14:31.432 "aliases": [ 00:14:31.432 "64613742-661e-4790-9d52-63ec32b69ebc" 00:14:31.432 ], 00:14:31.432 "product_name": "Malloc disk", 00:14:31.432 "block_size": 512, 00:14:31.432 "num_blocks": 65536, 00:14:31.432 "uuid": "64613742-661e-4790-9d52-63ec32b69ebc", 00:14:31.432 "assigned_rate_limits": { 00:14:31.432 "rw_ios_per_sec": 0, 00:14:31.432 "rw_mbytes_per_sec": 0, 00:14:31.432 "r_mbytes_per_sec": 0, 00:14:31.432 "w_mbytes_per_sec": 0 00:14:31.432 }, 00:14:31.432 "claimed": true, 00:14:31.432 "claim_type": "exclusive_write", 00:14:31.432 "zoned": false, 00:14:31.432 "supported_io_types": { 00:14:31.432 "read": true, 00:14:31.432 "write": true, 00:14:31.432 "unmap": true, 00:14:31.432 "flush": true, 00:14:31.432 "reset": true, 00:14:31.432 "nvme_admin": false, 00:14:31.432 "nvme_io": false, 00:14:31.432 "nvme_io_md": false, 00:14:31.432 "write_zeroes": true, 00:14:31.432 "zcopy": true, 00:14:31.432 "get_zone_info": false, 00:14:31.432 "zone_management": false, 00:14:31.432 "zone_append": false, 00:14:31.432 "compare": false, 00:14:31.432 "compare_and_write": false, 00:14:31.432 "abort": true, 00:14:31.432 "seek_hole": false, 00:14:31.432 "seek_data": false, 00:14:31.432 "copy": true, 00:14:31.432 "nvme_iov_md": false 00:14:31.432 }, 00:14:31.432 "memory_domains": [ 00:14:31.432 { 00:14:31.432 "dma_device_id": "system", 00:14:31.432 "dma_device_type": 1 00:14:31.432 }, 00:14:31.432 { 00:14:31.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.432 "dma_device_type": 2 00:14:31.432 } 00:14:31.432 ], 00:14:31.432 "driver_specific": {} 00:14:31.432 } 00:14:31.432 ] 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.432 23:32:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
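`verify_raid_bdev_state` in the trace above works by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and comparing fields against its arguments. A small Python equivalent of that selection and comparison, using a hypothetical miniature of the JSON quoted in the log:

```python
import json

# Hypothetical miniature of the `bdev_raid_get_bdevs all` output traced above.
raid_bdevs = json.loads('''
[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4
  }
]
''')

# Equivalent of jq's  .[] | select(.name == "Existed_Raid")
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# verify_raid_bdev_state then checks these against the values passed on its
# command line (here: Existed_Raid online raid5f 64 4).
assert info["state"] == "online"
assert info["raid_level"] == "raid5f"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4
```

The sketch mirrors the shape of the check, not the shell helper's exact implementation.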
00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.432 "name": "Existed_Raid", 00:14:31.432 "uuid": "bb7ef745-f8b4-4733-a8c8-ff29741632d8", 00:14:31.432 "strip_size_kb": 64, 00:14:31.432 "state": "online", 00:14:31.432 "raid_level": "raid5f", 00:14:31.432 "superblock": true, 00:14:31.432 "num_base_bdevs": 4, 00:14:31.432 "num_base_bdevs_discovered": 4, 00:14:31.432 "num_base_bdevs_operational": 4, 00:14:31.432 "base_bdevs_list": [ 00:14:31.432 { 00:14:31.432 "name": "BaseBdev1", 00:14:31.432 "uuid": "83d027d7-dc0d-4ac0-9257-a481e7b0977b", 00:14:31.432 "is_configured": true, 00:14:31.432 "data_offset": 2048, 00:14:31.432 "data_size": 63488 00:14:31.432 }, 00:14:31.432 { 00:14:31.432 "name": "BaseBdev2", 00:14:31.432 "uuid": "f5302348-ad63-49f2-9e76-bb9fbd0fa1c9", 00:14:31.432 "is_configured": true, 00:14:31.432 "data_offset": 2048, 00:14:31.432 "data_size": 63488 00:14:31.432 }, 00:14:31.432 { 00:14:31.432 "name": "BaseBdev3", 00:14:31.432 "uuid": "9133c7cd-5592-41c7-bc56-af8fd0071446", 00:14:31.432 "is_configured": true, 00:14:31.432 "data_offset": 2048, 00:14:31.432 "data_size": 63488 00:14:31.432 }, 00:14:31.432 { 00:14:31.432 "name": "BaseBdev4", 00:14:31.432 "uuid": "64613742-661e-4790-9d52-63ec32b69ebc", 00:14:31.432 "is_configured": true, 00:14:31.432 "data_offset": 2048, 00:14:31.432 "data_size": 63488 00:14:31.432 } 00:14:31.432 ] 00:14:31.432 }' 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.432 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.692 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.692 [2024-09-30 23:32:11.483652] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.693 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.693 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.693 "name": "Existed_Raid", 00:14:31.693 "aliases": [ 00:14:31.693 "bb7ef745-f8b4-4733-a8c8-ff29741632d8" 00:14:31.693 ], 00:14:31.693 "product_name": "Raid Volume", 00:14:31.693 "block_size": 512, 00:14:31.693 "num_blocks": 190464, 00:14:31.693 "uuid": "bb7ef745-f8b4-4733-a8c8-ff29741632d8", 00:14:31.693 "assigned_rate_limits": { 00:14:31.693 "rw_ios_per_sec": 0, 00:14:31.693 "rw_mbytes_per_sec": 0, 00:14:31.693 "r_mbytes_per_sec": 0, 00:14:31.693 "w_mbytes_per_sec": 0 00:14:31.693 }, 00:14:31.693 "claimed": false, 00:14:31.693 "zoned": false, 00:14:31.693 "supported_io_types": { 00:14:31.693 "read": true, 00:14:31.693 "write": true, 00:14:31.693 "unmap": false, 00:14:31.693 "flush": false, 
00:14:31.693 "reset": true, 00:14:31.693 "nvme_admin": false, 00:14:31.693 "nvme_io": false, 00:14:31.693 "nvme_io_md": false, 00:14:31.693 "write_zeroes": true, 00:14:31.693 "zcopy": false, 00:14:31.693 "get_zone_info": false, 00:14:31.693 "zone_management": false, 00:14:31.693 "zone_append": false, 00:14:31.693 "compare": false, 00:14:31.693 "compare_and_write": false, 00:14:31.693 "abort": false, 00:14:31.693 "seek_hole": false, 00:14:31.693 "seek_data": false, 00:14:31.693 "copy": false, 00:14:31.693 "nvme_iov_md": false 00:14:31.693 }, 00:14:31.693 "driver_specific": { 00:14:31.693 "raid": { 00:14:31.693 "uuid": "bb7ef745-f8b4-4733-a8c8-ff29741632d8", 00:14:31.693 "strip_size_kb": 64, 00:14:31.693 "state": "online", 00:14:31.693 "raid_level": "raid5f", 00:14:31.693 "superblock": true, 00:14:31.693 "num_base_bdevs": 4, 00:14:31.693 "num_base_bdevs_discovered": 4, 00:14:31.693 "num_base_bdevs_operational": 4, 00:14:31.693 "base_bdevs_list": [ 00:14:31.693 { 00:14:31.693 "name": "BaseBdev1", 00:14:31.693 "uuid": "83d027d7-dc0d-4ac0-9257-a481e7b0977b", 00:14:31.693 "is_configured": true, 00:14:31.693 "data_offset": 2048, 00:14:31.693 "data_size": 63488 00:14:31.693 }, 00:14:31.693 { 00:14:31.693 "name": "BaseBdev2", 00:14:31.693 "uuid": "f5302348-ad63-49f2-9e76-bb9fbd0fa1c9", 00:14:31.693 "is_configured": true, 00:14:31.693 "data_offset": 2048, 00:14:31.693 "data_size": 63488 00:14:31.693 }, 00:14:31.693 { 00:14:31.693 "name": "BaseBdev3", 00:14:31.693 "uuid": "9133c7cd-5592-41c7-bc56-af8fd0071446", 00:14:31.693 "is_configured": true, 00:14:31.693 "data_offset": 2048, 00:14:31.693 "data_size": 63488 00:14:31.693 }, 00:14:31.693 { 00:14:31.693 "name": "BaseBdev4", 00:14:31.693 "uuid": "64613742-661e-4790-9d52-63ec32b69ebc", 00:14:31.693 "is_configured": true, 00:14:31.693 "data_offset": 2048, 00:14:31.693 "data_size": 63488 00:14:31.693 } 00:14:31.693 ] 00:14:31.693 } 00:14:31.693 } 00:14:31.693 }' 00:14:31.693 23:32:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:31.953 BaseBdev2 00:14:31.953 BaseBdev3 00:14:31.953 BaseBdev4' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.953 23:32:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
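The `verify_raid_bdev_properties` loop above builds a geometry string per bdev with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`; for a plain malloc bdev the metadata fields are absent, jq renders them as empty strings, and the result is `512` followed by three spaces — which is why the trace compares against `\5\1\2\ \ \ `. A hedged Python sketch of that key construction (with hypothetical trimmed bdev dicts):

```python
# Mimics jq's  [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# where absent/null values join as empty strings, producing "512   "
# (512 plus three trailing spaces) for a metadata-less bdev.
def geometry_key(bdev):
    fields = [bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type")]
    return " ".join("" if f is None else str(f) for f in fields)

raid = {"block_size": 512}   # hypothetical trimmed raid bdev description
base = {"block_size": 512}   # hypothetical trimmed base bdev description

# The test passes when every base bdev's geometry matches the raid bdev's.
assert geometry_key(raid) == geometry_key(base)
print(repr(geometry_key(raid)))  # '512   '
```

Bdevs with real metadata (nonzero `md_size`, interleave, DIF type) would produce a longer key, and the comparison would catch any mismatch between the array and its members.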
00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.953 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.212 [2024-09-30 23:32:11.822891] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.212 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.213 "name": "Existed_Raid", 00:14:32.213 "uuid": "bb7ef745-f8b4-4733-a8c8-ff29741632d8", 00:14:32.213 "strip_size_kb": 64, 00:14:32.213 "state": "online", 00:14:32.213 "raid_level": "raid5f", 00:14:32.213 "superblock": true, 00:14:32.213 "num_base_bdevs": 4, 00:14:32.213 "num_base_bdevs_discovered": 3, 00:14:32.213 "num_base_bdevs_operational": 3, 00:14:32.213 "base_bdevs_list": [ 00:14:32.213 { 00:14:32.213 "name": null, 00:14:32.213 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:32.213 "is_configured": false, 00:14:32.213 "data_offset": 0, 00:14:32.213 "data_size": 63488 00:14:32.213 }, 00:14:32.213 { 00:14:32.213 "name": "BaseBdev2", 00:14:32.213 "uuid": "f5302348-ad63-49f2-9e76-bb9fbd0fa1c9", 00:14:32.213 "is_configured": true, 00:14:32.213 "data_offset": 2048, 00:14:32.213 "data_size": 63488 00:14:32.213 }, 00:14:32.213 { 00:14:32.213 "name": "BaseBdev3", 00:14:32.213 "uuid": "9133c7cd-5592-41c7-bc56-af8fd0071446", 00:14:32.213 "is_configured": true, 00:14:32.213 "data_offset": 2048, 00:14:32.213 "data_size": 63488 00:14:32.213 }, 00:14:32.213 { 00:14:32.213 "name": "BaseBdev4", 00:14:32.213 "uuid": "64613742-661e-4790-9d52-63ec32b69ebc", 00:14:32.213 "is_configured": true, 00:14:32.213 "data_offset": 2048, 00:14:32.213 "data_size": 63488 00:14:32.213 } 00:14:32.213 ] 00:14:32.213 }' 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.213 23:32:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.472 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.472 [2024-09-30 23:32:12.317346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:32.472 [2024-09-30 23:32:12.317554] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.732 [2024-09-30 23:32:12.328491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:32.732 
23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.732 [2024-09-30 23:32:12.384417] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:32.732 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.733 [2024-09-30 23:32:12.439561] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:32.733 [2024-09-30 23:32:12.439666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:32.733 BaseBdev2 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.733 [ 00:14:32.733 { 00:14:32.733 "name": "BaseBdev2", 00:14:32.733 "aliases": [ 00:14:32.733 "0e1094d1-3659-491c-95a5-b80b70e14c7e" 00:14:32.733 ], 00:14:32.733 "product_name": "Malloc disk", 00:14:32.733 "block_size": 512, 00:14:32.733 "num_blocks": 65536, 00:14:32.733 "uuid": 
"0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:32.733 "assigned_rate_limits": { 00:14:32.733 "rw_ios_per_sec": 0, 00:14:32.733 "rw_mbytes_per_sec": 0, 00:14:32.733 "r_mbytes_per_sec": 0, 00:14:32.733 "w_mbytes_per_sec": 0 00:14:32.733 }, 00:14:32.733 "claimed": false, 00:14:32.733 "zoned": false, 00:14:32.733 "supported_io_types": { 00:14:32.733 "read": true, 00:14:32.733 "write": true, 00:14:32.733 "unmap": true, 00:14:32.733 "flush": true, 00:14:32.733 "reset": true, 00:14:32.733 "nvme_admin": false, 00:14:32.733 "nvme_io": false, 00:14:32.733 "nvme_io_md": false, 00:14:32.733 "write_zeroes": true, 00:14:32.733 "zcopy": true, 00:14:32.733 "get_zone_info": false, 00:14:32.733 "zone_management": false, 00:14:32.733 "zone_append": false, 00:14:32.733 "compare": false, 00:14:32.733 "compare_and_write": false, 00:14:32.733 "abort": true, 00:14:32.733 "seek_hole": false, 00:14:32.733 "seek_data": false, 00:14:32.733 "copy": true, 00:14:32.733 "nvme_iov_md": false 00:14:32.733 }, 00:14:32.733 "memory_domains": [ 00:14:32.733 { 00:14:32.733 "dma_device_id": "system", 00:14:32.733 "dma_device_type": 1 00:14:32.733 }, 00:14:32.733 { 00:14:32.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.733 "dma_device_type": 2 00:14:32.733 } 00:14:32.733 ], 00:14:32.733 "driver_specific": {} 00:14:32.733 } 00:14:32.733 ] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.733 BaseBdev3 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.733 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 [ 00:14:32.993 { 00:14:32.993 "name": "BaseBdev3", 00:14:32.993 "aliases": [ 00:14:32.993 "1f4e09d9-20f7-49bf-bb52-31415d8c6344" 00:14:32.993 ], 00:14:32.993 
"product_name": "Malloc disk", 00:14:32.993 "block_size": 512, 00:14:32.993 "num_blocks": 65536, 00:14:32.993 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:32.993 "assigned_rate_limits": { 00:14:32.993 "rw_ios_per_sec": 0, 00:14:32.993 "rw_mbytes_per_sec": 0, 00:14:32.993 "r_mbytes_per_sec": 0, 00:14:32.993 "w_mbytes_per_sec": 0 00:14:32.993 }, 00:14:32.993 "claimed": false, 00:14:32.993 "zoned": false, 00:14:32.993 "supported_io_types": { 00:14:32.993 "read": true, 00:14:32.993 "write": true, 00:14:32.993 "unmap": true, 00:14:32.993 "flush": true, 00:14:32.993 "reset": true, 00:14:32.993 "nvme_admin": false, 00:14:32.993 "nvme_io": false, 00:14:32.993 "nvme_io_md": false, 00:14:32.993 "write_zeroes": true, 00:14:32.993 "zcopy": true, 00:14:32.993 "get_zone_info": false, 00:14:32.993 "zone_management": false, 00:14:32.993 "zone_append": false, 00:14:32.993 "compare": false, 00:14:32.993 "compare_and_write": false, 00:14:32.993 "abort": true, 00:14:32.993 "seek_hole": false, 00:14:32.993 "seek_data": false, 00:14:32.993 "copy": true, 00:14:32.993 "nvme_iov_md": false 00:14:32.993 }, 00:14:32.993 "memory_domains": [ 00:14:32.993 { 00:14:32.993 "dma_device_id": "system", 00:14:32.993 "dma_device_type": 1 00:14:32.993 }, 00:14:32.993 { 00:14:32.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.993 "dma_device_type": 2 00:14:32.993 } 00:14:32.993 ], 00:14:32.993 "driver_specific": {} 00:14:32.993 } 00:14:32.993 ] 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 BaseBdev4 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.993 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.993 [ 00:14:32.993 { 00:14:32.993 "name": "BaseBdev4", 00:14:32.993 
"aliases": [ 00:14:32.994 "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7" 00:14:32.994 ], 00:14:32.994 "product_name": "Malloc disk", 00:14:32.994 "block_size": 512, 00:14:32.994 "num_blocks": 65536, 00:14:32.994 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:32.994 "assigned_rate_limits": { 00:14:32.994 "rw_ios_per_sec": 0, 00:14:32.994 "rw_mbytes_per_sec": 0, 00:14:32.994 "r_mbytes_per_sec": 0, 00:14:32.994 "w_mbytes_per_sec": 0 00:14:32.994 }, 00:14:32.994 "claimed": false, 00:14:32.994 "zoned": false, 00:14:32.994 "supported_io_types": { 00:14:32.994 "read": true, 00:14:32.994 "write": true, 00:14:32.994 "unmap": true, 00:14:32.994 "flush": true, 00:14:32.994 "reset": true, 00:14:32.994 "nvme_admin": false, 00:14:32.994 "nvme_io": false, 00:14:32.994 "nvme_io_md": false, 00:14:32.994 "write_zeroes": true, 00:14:32.994 "zcopy": true, 00:14:32.994 "get_zone_info": false, 00:14:32.994 "zone_management": false, 00:14:32.994 "zone_append": false, 00:14:32.994 "compare": false, 00:14:32.994 "compare_and_write": false, 00:14:32.994 "abort": true, 00:14:32.994 "seek_hole": false, 00:14:32.994 "seek_data": false, 00:14:32.994 "copy": true, 00:14:32.994 "nvme_iov_md": false 00:14:32.994 }, 00:14:32.994 "memory_domains": [ 00:14:32.994 { 00:14:32.994 "dma_device_id": "system", 00:14:32.994 "dma_device_type": 1 00:14:32.994 }, 00:14:32.994 { 00:14:32.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.994 "dma_device_type": 2 00:14:32.994 } 00:14:32.994 ], 00:14:32.994 "driver_specific": {} 00:14:32.994 } 00:14:32.994 ] 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:32.994 
23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.994 [2024-09-30 23:32:12.666531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.994 [2024-09-30 23:32:12.666646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.994 [2024-09-30 23:32:12.666701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.994 [2024-09-30 23:32:12.668498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.994 [2024-09-30 23:32:12.668586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.994 "name": "Existed_Raid", 00:14:32.994 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:32.994 "strip_size_kb": 64, 00:14:32.994 "state": "configuring", 00:14:32.994 "raid_level": "raid5f", 00:14:32.994 "superblock": true, 00:14:32.994 "num_base_bdevs": 4, 00:14:32.994 "num_base_bdevs_discovered": 3, 00:14:32.994 "num_base_bdevs_operational": 4, 00:14:32.994 "base_bdevs_list": [ 00:14:32.994 { 00:14:32.994 "name": "BaseBdev1", 00:14:32.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.994 "is_configured": false, 00:14:32.994 "data_offset": 0, 00:14:32.994 "data_size": 0 00:14:32.994 }, 00:14:32.994 { 00:14:32.994 "name": "BaseBdev2", 00:14:32.994 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:32.994 "is_configured": true, 00:14:32.994 "data_offset": 2048, 00:14:32.994 "data_size": 63488 00:14:32.994 }, 00:14:32.994 { 00:14:32.994 "name": "BaseBdev3", 
00:14:32.994 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:32.994 "is_configured": true, 00:14:32.994 "data_offset": 2048, 00:14:32.994 "data_size": 63488 00:14:32.994 }, 00:14:32.994 { 00:14:32.994 "name": "BaseBdev4", 00:14:32.994 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:32.994 "is_configured": true, 00:14:32.994 "data_offset": 2048, 00:14:32.994 "data_size": 63488 00:14:32.994 } 00:14:32.994 ] 00:14:32.994 }' 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.994 23:32:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.254 [2024-09-30 23:32:13.069819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.254 
23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.254 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.514 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.514 "name": "Existed_Raid", 00:14:33.514 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:33.514 "strip_size_kb": 64, 00:14:33.514 "state": "configuring", 00:14:33.514 "raid_level": "raid5f", 00:14:33.514 "superblock": true, 00:14:33.514 "num_base_bdevs": 4, 00:14:33.514 "num_base_bdevs_discovered": 2, 00:14:33.514 "num_base_bdevs_operational": 4, 00:14:33.514 "base_bdevs_list": [ 00:14:33.514 { 00:14:33.514 "name": "BaseBdev1", 00:14:33.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.514 "is_configured": false, 00:14:33.514 "data_offset": 0, 00:14:33.514 "data_size": 0 00:14:33.514 }, 00:14:33.514 { 00:14:33.514 "name": null, 00:14:33.514 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:33.514 "is_configured": false, 00:14:33.514 "data_offset": 0, 00:14:33.514 "data_size": 63488 00:14:33.514 }, 00:14:33.514 { 
00:14:33.514 "name": "BaseBdev3", 00:14:33.514 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:33.514 "is_configured": true, 00:14:33.514 "data_offset": 2048, 00:14:33.514 "data_size": 63488 00:14:33.514 }, 00:14:33.514 { 00:14:33.514 "name": "BaseBdev4", 00:14:33.514 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:33.514 "is_configured": true, 00:14:33.514 "data_offset": 2048, 00:14:33.514 "data_size": 63488 00:14:33.514 } 00:14:33.514 ] 00:14:33.514 }' 00:14:33.514 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.514 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.774 [2024-09-30 23:32:13.540014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.774 BaseBdev1 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:33.774 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.775 [ 00:14:33.775 { 00:14:33.775 "name": "BaseBdev1", 00:14:33.775 "aliases": [ 00:14:33.775 "b2e5423b-d2fb-437c-94eb-01d480815fa3" 00:14:33.775 ], 00:14:33.775 "product_name": "Malloc disk", 00:14:33.775 "block_size": 512, 00:14:33.775 "num_blocks": 65536, 00:14:33.775 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:33.775 "assigned_rate_limits": { 00:14:33.775 "rw_ios_per_sec": 0, 00:14:33.775 "rw_mbytes_per_sec": 0, 00:14:33.775 
"r_mbytes_per_sec": 0, 00:14:33.775 "w_mbytes_per_sec": 0 00:14:33.775 }, 00:14:33.775 "claimed": true, 00:14:33.775 "claim_type": "exclusive_write", 00:14:33.775 "zoned": false, 00:14:33.775 "supported_io_types": { 00:14:33.775 "read": true, 00:14:33.775 "write": true, 00:14:33.775 "unmap": true, 00:14:33.775 "flush": true, 00:14:33.775 "reset": true, 00:14:33.775 "nvme_admin": false, 00:14:33.775 "nvme_io": false, 00:14:33.775 "nvme_io_md": false, 00:14:33.775 "write_zeroes": true, 00:14:33.775 "zcopy": true, 00:14:33.775 "get_zone_info": false, 00:14:33.775 "zone_management": false, 00:14:33.775 "zone_append": false, 00:14:33.775 "compare": false, 00:14:33.775 "compare_and_write": false, 00:14:33.775 "abort": true, 00:14:33.775 "seek_hole": false, 00:14:33.775 "seek_data": false, 00:14:33.775 "copy": true, 00:14:33.775 "nvme_iov_md": false 00:14:33.775 }, 00:14:33.775 "memory_domains": [ 00:14:33.775 { 00:14:33.775 "dma_device_id": "system", 00:14:33.775 "dma_device_type": 1 00:14:33.775 }, 00:14:33.775 { 00:14:33.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.775 "dma_device_type": 2 00:14:33.775 } 00:14:33.775 ], 00:14:33.775 "driver_specific": {} 00:14:33.775 } 00:14:33.775 ] 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.775 23:32:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.775 "name": "Existed_Raid", 00:14:33.775 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:33.775 "strip_size_kb": 64, 00:14:33.775 "state": "configuring", 00:14:33.775 "raid_level": "raid5f", 00:14:33.775 "superblock": true, 00:14:33.775 "num_base_bdevs": 4, 00:14:33.775 "num_base_bdevs_discovered": 3, 00:14:33.775 "num_base_bdevs_operational": 4, 00:14:33.775 "base_bdevs_list": [ 00:14:33.775 { 00:14:33.775 "name": "BaseBdev1", 00:14:33.775 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:33.775 "is_configured": true, 00:14:33.775 "data_offset": 2048, 00:14:33.775 "data_size": 63488 00:14:33.775 
}, 00:14:33.775 { 00:14:33.775 "name": null, 00:14:33.775 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:33.775 "is_configured": false, 00:14:33.775 "data_offset": 0, 00:14:33.775 "data_size": 63488 00:14:33.775 }, 00:14:33.775 { 00:14:33.775 "name": "BaseBdev3", 00:14:33.775 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:33.775 "is_configured": true, 00:14:33.775 "data_offset": 2048, 00:14:33.775 "data_size": 63488 00:14:33.775 }, 00:14:33.775 { 00:14:33.775 "name": "BaseBdev4", 00:14:33.775 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:33.775 "is_configured": true, 00:14:33.775 "data_offset": 2048, 00:14:33.775 "data_size": 63488 00:14:33.775 } 00:14:33.775 ] 00:14:33.775 }' 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.775 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.345 
[2024-09-30 23:32:13.991345] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.345 23:32:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.345 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.345 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.345 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.345 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.345 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:34.345 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.345 "name": "Existed_Raid", 00:14:34.345 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:34.345 "strip_size_kb": 64, 00:14:34.345 "state": "configuring", 00:14:34.345 "raid_level": "raid5f", 00:14:34.345 "superblock": true, 00:14:34.345 "num_base_bdevs": 4, 00:14:34.345 "num_base_bdevs_discovered": 2, 00:14:34.345 "num_base_bdevs_operational": 4, 00:14:34.345 "base_bdevs_list": [ 00:14:34.345 { 00:14:34.345 "name": "BaseBdev1", 00:14:34.345 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:34.345 "is_configured": true, 00:14:34.345 "data_offset": 2048, 00:14:34.345 "data_size": 63488 00:14:34.345 }, 00:14:34.345 { 00:14:34.345 "name": null, 00:14:34.345 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:34.345 "is_configured": false, 00:14:34.345 "data_offset": 0, 00:14:34.345 "data_size": 63488 00:14:34.345 }, 00:14:34.345 { 00:14:34.345 "name": null, 00:14:34.345 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:34.345 "is_configured": false, 00:14:34.345 "data_offset": 0, 00:14:34.345 "data_size": 63488 00:14:34.345 }, 00:14:34.345 { 00:14:34.345 "name": "BaseBdev4", 00:14:34.345 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:34.345 "is_configured": true, 00:14:34.345 "data_offset": 2048, 00:14:34.345 "data_size": 63488 00:14:34.345 } 00:14:34.345 ] 00:14:34.345 }' 00:14:34.345 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.346 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.606 [2024-09-30 23:32:14.414653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.606 23:32:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.606 "name": "Existed_Raid", 00:14:34.606 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:34.606 "strip_size_kb": 64, 00:14:34.606 "state": "configuring", 00:14:34.606 "raid_level": "raid5f", 00:14:34.606 "superblock": true, 00:14:34.606 "num_base_bdevs": 4, 00:14:34.606 "num_base_bdevs_discovered": 3, 00:14:34.606 "num_base_bdevs_operational": 4, 00:14:34.606 "base_bdevs_list": [ 00:14:34.606 { 00:14:34.606 "name": "BaseBdev1", 00:14:34.606 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:34.606 "is_configured": true, 00:14:34.606 "data_offset": 2048, 00:14:34.606 "data_size": 63488 00:14:34.606 }, 00:14:34.606 { 00:14:34.606 "name": null, 00:14:34.606 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:34.606 "is_configured": false, 00:14:34.606 "data_offset": 0, 00:14:34.606 "data_size": 63488 00:14:34.606 }, 00:14:34.606 { 00:14:34.606 "name": "BaseBdev3", 00:14:34.606 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:34.606 "is_configured": true, 00:14:34.606 "data_offset": 2048, 00:14:34.606 "data_size": 63488 00:14:34.606 }, 00:14:34.606 { 
00:14:34.606 "name": "BaseBdev4", 00:14:34.606 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:34.606 "is_configured": true, 00:14:34.606 "data_offset": 2048, 00:14:34.606 "data_size": 63488 00:14:34.606 } 00:14:34.606 ] 00:14:34.606 }' 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.606 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.176 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.177 [2024-09-30 23:32:14.893855] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.177 "name": "Existed_Raid", 00:14:35.177 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:35.177 "strip_size_kb": 64, 00:14:35.177 "state": "configuring", 00:14:35.177 "raid_level": "raid5f", 00:14:35.177 "superblock": true, 00:14:35.177 "num_base_bdevs": 4, 00:14:35.177 "num_base_bdevs_discovered": 2, 00:14:35.177 
"num_base_bdevs_operational": 4, 00:14:35.177 "base_bdevs_list": [ 00:14:35.177 { 00:14:35.177 "name": null, 00:14:35.177 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:35.177 "is_configured": false, 00:14:35.177 "data_offset": 0, 00:14:35.177 "data_size": 63488 00:14:35.177 }, 00:14:35.177 { 00:14:35.177 "name": null, 00:14:35.177 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:35.177 "is_configured": false, 00:14:35.177 "data_offset": 0, 00:14:35.177 "data_size": 63488 00:14:35.177 }, 00:14:35.177 { 00:14:35.177 "name": "BaseBdev3", 00:14:35.177 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:35.177 "is_configured": true, 00:14:35.177 "data_offset": 2048, 00:14:35.177 "data_size": 63488 00:14:35.177 }, 00:14:35.177 { 00:14:35.177 "name": "BaseBdev4", 00:14:35.177 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:35.177 "is_configured": true, 00:14:35.177 "data_offset": 2048, 00:14:35.177 "data_size": 63488 00:14:35.177 } 00:14:35.177 ] 00:14:35.177 }' 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.177 23:32:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.748 [2024-09-30 23:32:15.395439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.748 "name": "Existed_Raid", 00:14:35.748 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:35.748 "strip_size_kb": 64, 00:14:35.748 "state": "configuring", 00:14:35.748 "raid_level": "raid5f", 00:14:35.748 "superblock": true, 00:14:35.748 "num_base_bdevs": 4, 00:14:35.748 "num_base_bdevs_discovered": 3, 00:14:35.748 "num_base_bdevs_operational": 4, 00:14:35.748 "base_bdevs_list": [ 00:14:35.748 { 00:14:35.748 "name": null, 00:14:35.748 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:35.748 "is_configured": false, 00:14:35.748 "data_offset": 0, 00:14:35.748 "data_size": 63488 00:14:35.748 }, 00:14:35.748 { 00:14:35.748 "name": "BaseBdev2", 00:14:35.748 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:35.748 "is_configured": true, 00:14:35.748 "data_offset": 2048, 00:14:35.748 "data_size": 63488 00:14:35.748 }, 00:14:35.748 { 00:14:35.748 "name": "BaseBdev3", 00:14:35.748 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:35.748 "is_configured": true, 00:14:35.748 "data_offset": 2048, 00:14:35.748 "data_size": 63488 00:14:35.748 }, 00:14:35.748 { 00:14:35.748 "name": "BaseBdev4", 00:14:35.748 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:35.748 "is_configured": true, 00:14:35.748 "data_offset": 2048, 00:14:35.748 "data_size": 63488 00:14:35.748 } 00:14:35.748 ] 00:14:35.748 }' 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.748 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:36.008 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.008 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.008 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.008 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b2e5423b-d2fb-437c-94eb-01d480815fa3 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.269 NewBaseBdev 00:14:36.269 [2024-09-30 23:32:15.969393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:36.269 [2024-09-30 23:32:15.969563] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 
00:14:36.269 [2024-09-30 23:32:15.969576] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:36.269 [2024-09-30 23:32:15.969809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:36.269 [2024-09-30 23:32:15.970252] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:36.269 [2024-09-30 23:32:15.970266] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:36.269 [2024-09-30 23:32:15.970360] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.269 23:32:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.269 [ 00:14:36.269 { 00:14:36.269 "name": "NewBaseBdev", 00:14:36.269 "aliases": [ 00:14:36.269 "b2e5423b-d2fb-437c-94eb-01d480815fa3" 00:14:36.269 ], 00:14:36.269 "product_name": "Malloc disk", 00:14:36.269 "block_size": 512, 00:14:36.269 "num_blocks": 65536, 00:14:36.269 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:36.269 "assigned_rate_limits": { 00:14:36.269 "rw_ios_per_sec": 0, 00:14:36.269 "rw_mbytes_per_sec": 0, 00:14:36.269 "r_mbytes_per_sec": 0, 00:14:36.269 "w_mbytes_per_sec": 0 00:14:36.269 }, 00:14:36.269 "claimed": true, 00:14:36.269 "claim_type": "exclusive_write", 00:14:36.269 "zoned": false, 00:14:36.269 "supported_io_types": { 00:14:36.269 "read": true, 00:14:36.269 "write": true, 00:14:36.269 "unmap": true, 00:14:36.269 "flush": true, 00:14:36.269 "reset": true, 00:14:36.269 "nvme_admin": false, 00:14:36.269 "nvme_io": false, 00:14:36.269 "nvme_io_md": false, 00:14:36.269 "write_zeroes": true, 00:14:36.269 "zcopy": true, 00:14:36.269 "get_zone_info": false, 00:14:36.269 "zone_management": false, 00:14:36.269 "zone_append": false, 00:14:36.269 "compare": false, 00:14:36.269 "compare_and_write": false, 00:14:36.269 "abort": true, 00:14:36.269 "seek_hole": false, 00:14:36.269 "seek_data": false, 00:14:36.269 "copy": true, 00:14:36.269 "nvme_iov_md": false 00:14:36.269 }, 00:14:36.269 "memory_domains": [ 00:14:36.269 { 00:14:36.269 "dma_device_id": "system", 00:14:36.269 "dma_device_type": 1 00:14:36.269 }, 00:14:36.269 { 00:14:36.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.269 "dma_device_type": 2 00:14:36.269 } 00:14:36.269 ], 00:14:36.269 "driver_specific": {} 00:14:36.269 } 00:14:36.269 ] 00:14:36.269 23:32:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:36.269 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.269 "name": "Existed_Raid", 00:14:36.269 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:36.269 "strip_size_kb": 64, 00:14:36.269 "state": "online", 00:14:36.269 "raid_level": "raid5f", 00:14:36.269 "superblock": true, 00:14:36.269 "num_base_bdevs": 4, 00:14:36.269 "num_base_bdevs_discovered": 4, 00:14:36.269 "num_base_bdevs_operational": 4, 00:14:36.269 "base_bdevs_list": [ 00:14:36.269 { 00:14:36.269 "name": "NewBaseBdev", 00:14:36.269 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:36.269 "is_configured": true, 00:14:36.269 "data_offset": 2048, 00:14:36.269 "data_size": 63488 00:14:36.269 }, 00:14:36.269 { 00:14:36.269 "name": "BaseBdev2", 00:14:36.269 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:36.269 "is_configured": true, 00:14:36.269 "data_offset": 2048, 00:14:36.269 "data_size": 63488 00:14:36.269 }, 00:14:36.269 { 00:14:36.269 "name": "BaseBdev3", 00:14:36.270 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:36.270 "is_configured": true, 00:14:36.270 "data_offset": 2048, 00:14:36.270 "data_size": 63488 00:14:36.270 }, 00:14:36.270 { 00:14:36.270 "name": "BaseBdev4", 00:14:36.270 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:36.270 "is_configured": true, 00:14:36.270 "data_offset": 2048, 00:14:36.270 "data_size": 63488 00:14:36.270 } 00:14:36.270 ] 00:14:36.270 }' 00:14:36.270 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.270 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.840 [2024-09-30 23:32:16.412832] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.840 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:36.840 "name": "Existed_Raid", 00:14:36.840 "aliases": [ 00:14:36.840 "50336d50-967d-42ab-a3f1-8dbd7651206d" 00:14:36.840 ], 00:14:36.840 "product_name": "Raid Volume", 00:14:36.840 "block_size": 512, 00:14:36.840 "num_blocks": 190464, 00:14:36.840 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:36.840 "assigned_rate_limits": { 00:14:36.840 "rw_ios_per_sec": 0, 00:14:36.840 "rw_mbytes_per_sec": 0, 00:14:36.840 "r_mbytes_per_sec": 0, 00:14:36.840 "w_mbytes_per_sec": 0 00:14:36.840 }, 00:14:36.840 "claimed": false, 00:14:36.840 "zoned": false, 00:14:36.840 "supported_io_types": { 00:14:36.840 "read": true, 00:14:36.840 "write": true, 00:14:36.840 "unmap": false, 00:14:36.840 "flush": false, 00:14:36.840 "reset": true, 00:14:36.840 "nvme_admin": false, 00:14:36.840 "nvme_io": false, 
00:14:36.840 "nvme_io_md": false, 00:14:36.840 "write_zeroes": true, 00:14:36.840 "zcopy": false, 00:14:36.840 "get_zone_info": false, 00:14:36.840 "zone_management": false, 00:14:36.840 "zone_append": false, 00:14:36.840 "compare": false, 00:14:36.840 "compare_and_write": false, 00:14:36.840 "abort": false, 00:14:36.840 "seek_hole": false, 00:14:36.840 "seek_data": false, 00:14:36.840 "copy": false, 00:14:36.840 "nvme_iov_md": false 00:14:36.840 }, 00:14:36.840 "driver_specific": { 00:14:36.840 "raid": { 00:14:36.840 "uuid": "50336d50-967d-42ab-a3f1-8dbd7651206d", 00:14:36.840 "strip_size_kb": 64, 00:14:36.840 "state": "online", 00:14:36.840 "raid_level": "raid5f", 00:14:36.840 "superblock": true, 00:14:36.840 "num_base_bdevs": 4, 00:14:36.840 "num_base_bdevs_discovered": 4, 00:14:36.840 "num_base_bdevs_operational": 4, 00:14:36.840 "base_bdevs_list": [ 00:14:36.840 { 00:14:36.840 "name": "NewBaseBdev", 00:14:36.840 "uuid": "b2e5423b-d2fb-437c-94eb-01d480815fa3", 00:14:36.840 "is_configured": true, 00:14:36.840 "data_offset": 2048, 00:14:36.840 "data_size": 63488 00:14:36.840 }, 00:14:36.840 { 00:14:36.840 "name": "BaseBdev2", 00:14:36.840 "uuid": "0e1094d1-3659-491c-95a5-b80b70e14c7e", 00:14:36.840 "is_configured": true, 00:14:36.840 "data_offset": 2048, 00:14:36.840 "data_size": 63488 00:14:36.840 }, 00:14:36.840 { 00:14:36.841 "name": "BaseBdev3", 00:14:36.841 "uuid": "1f4e09d9-20f7-49bf-bb52-31415d8c6344", 00:14:36.841 "is_configured": true, 00:14:36.841 "data_offset": 2048, 00:14:36.841 "data_size": 63488 00:14:36.841 }, 00:14:36.841 { 00:14:36.841 "name": "BaseBdev4", 00:14:36.841 "uuid": "a48bc9fe-8bf2-4f39-a6ee-d3c1dd7cb6e7", 00:14:36.841 "is_configured": true, 00:14:36.841 "data_offset": 2048, 00:14:36.841 "data_size": 63488 00:14:36.841 } 00:14:36.841 ] 00:14:36.841 } 00:14:36.841 } 00:14:36.841 }' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:36.841 BaseBdev2 00:14:36.841 BaseBdev3 00:14:36.841 BaseBdev4' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.841 23:32:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.841 23:32:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:36.841 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.102 [2024-09-30 23:32:16.708122] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.102 [2024-09-30 23:32:16.708194] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.102 [2024-09-30 23:32:16.708290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.102 [2024-09-30 23:32:16.708571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.102 [2024-09-30 23:32:16.708630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93940 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93940 ']' 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 93940 00:14:37.102 23:32:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93940 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93940' 00:14:37.102 killing process with pid 93940 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93940 00:14:37.102 [2024-09-30 23:32:16.749731] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.102 23:32:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93940 00:14:37.102 [2024-09-30 23:32:16.790451] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:37.363 23:32:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:37.363 00:14:37.363 real 0m9.217s 00:14:37.363 user 0m15.725s 00:14:37.363 sys 0m1.913s 00:14:37.363 23:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:37.363 ************************************ 00:14:37.363 END TEST raid5f_state_function_test_sb 00:14:37.363 ************************************ 00:14:37.363 23:32:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.363 23:32:17 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:37.363 23:32:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:37.363 
23:32:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:37.363 23:32:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:37.363 ************************************ 00:14:37.363 START TEST raid5f_superblock_test 00:14:37.363 ************************************ 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94588 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94588 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94588 ']' 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.363 23:32:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.363 [2024-09-30 23:32:17.169792] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
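The trace above starts the `bdev_svc` app and then blocks in `waitforlisten 94588` until the process is accepting RPCs on the UNIX socket `/var/tmp/spdk.sock`. As a rough illustration of that pattern (this is a generic sketch, not SPDK's actual `waitforlisten` implementation; the function name and polling interval are invented for this example):

```python
import socket
import time

def wait_for_listen(sock_path, timeout=5.0, interval=0.05):
    # Poll a UNIX domain socket until connect() succeeds or the timeout
    # expires. Returns True once something is listening, False otherwise.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            time.sleep(interval)
        finally:
            s.close()
    return False
```

The real helper additionally checks that the PID is still alive between polls, so a crashed daemon fails the test quickly instead of burning the whole timeout.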
00:14:37.363 [2024-09-30 23:32:17.170036] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94588 ] 00:14:37.623 [2024-09-30 23:32:17.312040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.623 [2024-09-30 23:32:17.354799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.623 [2024-09-30 23:32:17.397071] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.623 [2024-09-30 23:32:17.397108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.194 malloc1 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.194 [2024-09-30 23:32:18.035394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:38.194 [2024-09-30 23:32:18.035567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.194 [2024-09-30 23:32:18.035606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:38.194 [2024-09-30 23:32:18.035658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.194 [2024-09-30 23:32:18.037699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.194 [2024-09-30 23:32:18.037789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:38.194 pt1 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
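The xtrace lines at `bdev_raid.sh@416`–`@425` above show the setup loop's bookkeeping: for each base bdev index it appends a malloc backing-bdev name, a passthru name, and a fixed-form UUID ending in the index to three parallel arrays, then issues `bdev_malloc_create 32 512` and `bdev_passthru_create` over RPC. A minimal Python sketch of just the array construction visible in the trace (the function name is invented; the naming patterns are taken directly from the log):

```python
def build_base_bdev_tables(num_base_bdevs):
    # Mirror the bash loop: base_bdevs_malloc+=(malloc$i),
    # base_bdevs_pt+=(pt$i), and a UUID whose final 12-digit
    # group is the zero-padded index.
    base_bdevs_malloc = []
    base_bdevs_pt = []
    base_bdevs_pt_uuid = []
    for i in range(1, num_base_bdevs + 1):
        base_bdevs_malloc.append(f"malloc{i}")
        base_bdevs_pt.append(f"pt{i}")
        base_bdevs_pt_uuid.append(f"00000000-0000-0000-0000-{i:012d}")
    return base_bdevs_malloc, base_bdevs_pt, base_bdevs_pt_uuid
```

With `num_base_bdevs=4` this reproduces the `pt1`..`pt4` names and the `...000000000001`..`...000000000004` UUIDs that appear in the `base_bdevs_list` JSON later in the run.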
00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.194 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.455 malloc2 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.455 [2024-09-30 23:32:18.076474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.455 [2024-09-30 23:32:18.076591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.455 [2024-09-30 23:32:18.076622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:38.455 [2024-09-30 23:32:18.076655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.455 [2024-09-30 23:32:18.078718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.455 [2024-09-30 23:32:18.078786] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.455 pt2 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.455 malloc3 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.455 [2024-09-30 23:32:18.105005] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:38.455 [2024-09-30 23:32:18.105106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.455 [2024-09-30 23:32:18.105155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:38.455 [2024-09-30 23:32:18.105183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.455 [2024-09-30 23:32:18.107261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.455 [2024-09-30 23:32:18.107330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:38.455 pt3 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.455 23:32:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.455 malloc4 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.455 [2024-09-30 23:32:18.137465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:38.455 [2024-09-30 23:32:18.137579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.455 [2024-09-30 23:32:18.137609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:38.455 [2024-09-30 23:32:18.137639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.455 [2024-09-30 23:32:18.139673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.455 [2024-09-30 23:32:18.139757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:38.455 pt4 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.455 [2024-09-30 23:32:18.149523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:38.455 [2024-09-30 23:32:18.151381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.455 [2024-09-30 23:32:18.151485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:38.455 [2024-09-30 23:32:18.151563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:38.455 [2024-09-30 23:32:18.151770] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:38.455 [2024-09-30 23:32:18.151829] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:38.455 [2024-09-30 23:32:18.152092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:38.455 [2024-09-30 23:32:18.152568] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:38.455 [2024-09-30 23:32:18.152611] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:38.455 [2024-09-30 23:32:18.152774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.455 
23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.455 "name": "raid_bdev1", 00:14:38.455 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:38.455 "strip_size_kb": 64, 00:14:38.455 "state": "online", 00:14:38.455 "raid_level": "raid5f", 00:14:38.455 "superblock": true, 00:14:38.455 "num_base_bdevs": 4, 00:14:38.455 "num_base_bdevs_discovered": 4, 00:14:38.455 "num_base_bdevs_operational": 4, 00:14:38.455 "base_bdevs_list": [ 00:14:38.455 { 00:14:38.455 "name": "pt1", 00:14:38.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.455 "is_configured": true, 00:14:38.455 "data_offset": 2048, 00:14:38.455 "data_size": 63488 00:14:38.455 }, 00:14:38.455 { 00:14:38.455 "name": "pt2", 00:14:38.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.455 "is_configured": true, 00:14:38.455 "data_offset": 2048, 00:14:38.455 
"data_size": 63488 00:14:38.455 }, 00:14:38.455 { 00:14:38.455 "name": "pt3", 00:14:38.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.455 "is_configured": true, 00:14:38.455 "data_offset": 2048, 00:14:38.455 "data_size": 63488 00:14:38.455 }, 00:14:38.455 { 00:14:38.455 "name": "pt4", 00:14:38.455 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.455 "is_configured": true, 00:14:38.455 "data_offset": 2048, 00:14:38.455 "data_size": 63488 00:14:38.455 } 00:14:38.455 ] 00:14:38.455 }' 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.455 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.715 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:38.715 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:38.715 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:38.715 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:38.715 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:38.715 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:38.716 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:38.716 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:38.716 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.716 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.716 [2024-09-30 23:32:18.553996] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:38.976 "name": "raid_bdev1", 00:14:38.976 "aliases": [ 00:14:38.976 "36fe2ebe-eafe-4931-9a60-268d657ccdfb" 00:14:38.976 ], 00:14:38.976 "product_name": "Raid Volume", 00:14:38.976 "block_size": 512, 00:14:38.976 "num_blocks": 190464, 00:14:38.976 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:38.976 "assigned_rate_limits": { 00:14:38.976 "rw_ios_per_sec": 0, 00:14:38.976 "rw_mbytes_per_sec": 0, 00:14:38.976 "r_mbytes_per_sec": 0, 00:14:38.976 "w_mbytes_per_sec": 0 00:14:38.976 }, 00:14:38.976 "claimed": false, 00:14:38.976 "zoned": false, 00:14:38.976 "supported_io_types": { 00:14:38.976 "read": true, 00:14:38.976 "write": true, 00:14:38.976 "unmap": false, 00:14:38.976 "flush": false, 00:14:38.976 "reset": true, 00:14:38.976 "nvme_admin": false, 00:14:38.976 "nvme_io": false, 00:14:38.976 "nvme_io_md": false, 00:14:38.976 "write_zeroes": true, 00:14:38.976 "zcopy": false, 00:14:38.976 "get_zone_info": false, 00:14:38.976 "zone_management": false, 00:14:38.976 "zone_append": false, 00:14:38.976 "compare": false, 00:14:38.976 "compare_and_write": false, 00:14:38.976 "abort": false, 00:14:38.976 "seek_hole": false, 00:14:38.976 "seek_data": false, 00:14:38.976 "copy": false, 00:14:38.976 "nvme_iov_md": false 00:14:38.976 }, 00:14:38.976 "driver_specific": { 00:14:38.976 "raid": { 00:14:38.976 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:38.976 "strip_size_kb": 64, 00:14:38.976 "state": "online", 00:14:38.976 "raid_level": "raid5f", 00:14:38.976 "superblock": true, 00:14:38.976 "num_base_bdevs": 4, 00:14:38.976 "num_base_bdevs_discovered": 4, 00:14:38.976 "num_base_bdevs_operational": 4, 00:14:38.976 "base_bdevs_list": [ 00:14:38.976 { 00:14:38.976 "name": "pt1", 00:14:38.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.976 "is_configured": true, 00:14:38.976 "data_offset": 2048, 
00:14:38.976 "data_size": 63488 00:14:38.976 }, 00:14:38.976 { 00:14:38.976 "name": "pt2", 00:14:38.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.976 "is_configured": true, 00:14:38.976 "data_offset": 2048, 00:14:38.976 "data_size": 63488 00:14:38.976 }, 00:14:38.976 { 00:14:38.976 "name": "pt3", 00:14:38.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.976 "is_configured": true, 00:14:38.976 "data_offset": 2048, 00:14:38.976 "data_size": 63488 00:14:38.976 }, 00:14:38.976 { 00:14:38.976 "name": "pt4", 00:14:38.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.976 "is_configured": true, 00:14:38.976 "data_offset": 2048, 00:14:38.976 "data_size": 63488 00:14:38.976 } 00:14:38.976 ] 00:14:38.976 } 00:14:38.976 } 00:14:38.976 }' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:38.976 pt2 00:14:38.976 pt3 00:14:38.976 pt4' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.976 23:32:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.976 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:39.237 [2024-09-30 23:32:18.845472] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=36fe2ebe-eafe-4931-9a60-268d657ccdfb 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
36fe2ebe-eafe-4931-9a60-268d657ccdfb ']' 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.237 [2024-09-30 23:32:18.893213] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.237 [2024-09-30 23:32:18.893284] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.237 [2024-09-30 23:32:18.893375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.237 [2024-09-30 23:32:18.893501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.237 [2024-09-30 23:32:18.893553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.237 
23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:39.238 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:39.238 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.238 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.238 23:32:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.238 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:39.238 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.238 23:32:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:39.238 23:32:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.238 [2024-09-30 23:32:19.045045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:39.238 [2024-09-30 23:32:19.046891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:39.238 [2024-09-30 23:32:19.046986] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:39.238 [2024-09-30 23:32:19.047031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:39.238 [2024-09-30 23:32:19.047118] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:39.238 [2024-09-30 23:32:19.047219] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:39.238 [2024-09-30 23:32:19.047270] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:39.238 [2024-09-30 23:32:19.047350] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:39.238 [2024-09-30 23:32:19.047403] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.238 [2024-09-30 23:32:19.047435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:39.238 request: 00:14:39.238 { 00:14:39.238 "name": "raid_bdev1", 00:14:39.238 "raid_level": "raid5f", 00:14:39.238 "base_bdevs": [ 00:14:39.238 "malloc1", 00:14:39.238 "malloc2", 00:14:39.238 "malloc3", 00:14:39.238 "malloc4" 00:14:39.238 ], 00:14:39.238 "strip_size_kb": 64, 00:14:39.238 "superblock": false, 00:14:39.238 "method": "bdev_raid_create", 00:14:39.238 "req_id": 1 00:14:39.238 } 00:14:39.238 Got JSON-RPC error response 
00:14:39.238 response: 00:14:39.238 { 00:14:39.238 "code": -17, 00:14:39.238 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:39.238 } 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.238 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.498 [2024-09-30 23:32:19.108902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:39.498 [2024-09-30 23:32:19.108998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:39.498 [2024-09-30 23:32:19.109032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:39.498 [2024-09-30 23:32:19.109058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.498 [2024-09-30 23:32:19.111078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.498 [2024-09-30 23:32:19.111139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:39.498 [2024-09-30 23:32:19.111238] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:39.498 [2024-09-30 23:32:19.111294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:39.498 pt1 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.498 "name": "raid_bdev1", 00:14:39.498 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:39.498 "strip_size_kb": 64, 00:14:39.498 "state": "configuring", 00:14:39.498 "raid_level": "raid5f", 00:14:39.498 "superblock": true, 00:14:39.498 "num_base_bdevs": 4, 00:14:39.498 "num_base_bdevs_discovered": 1, 00:14:39.498 "num_base_bdevs_operational": 4, 00:14:39.498 "base_bdevs_list": [ 00:14:39.498 { 00:14:39.498 "name": "pt1", 00:14:39.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.498 "is_configured": true, 00:14:39.498 "data_offset": 2048, 00:14:39.498 "data_size": 63488 00:14:39.498 }, 00:14:39.498 { 00:14:39.498 "name": null, 00:14:39.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.498 "is_configured": false, 00:14:39.498 "data_offset": 2048, 00:14:39.498 "data_size": 63488 00:14:39.498 }, 00:14:39.498 { 00:14:39.498 "name": null, 00:14:39.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.498 "is_configured": false, 00:14:39.498 "data_offset": 2048, 00:14:39.498 "data_size": 63488 00:14:39.498 }, 00:14:39.498 { 00:14:39.498 "name": null, 00:14:39.498 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:39.498 "is_configured": false, 00:14:39.498 "data_offset": 2048, 00:14:39.498 "data_size": 63488 00:14:39.498 } 00:14:39.498 ] 00:14:39.498 }' 
00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.498 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.758 [2024-09-30 23:32:19.516224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:39.758 [2024-09-30 23:32:19.516335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.758 [2024-09-30 23:32:19.516369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:39.758 [2024-09-30 23:32:19.516396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.758 [2024-09-30 23:32:19.516740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.758 [2024-09-30 23:32:19.516792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:39.758 [2024-09-30 23:32:19.516889] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:39.758 [2024-09-30 23:32:19.516935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:39.758 pt2 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.758 [2024-09-30 23:32:19.528217] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.758 "name": "raid_bdev1", 00:14:39.758 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:39.758 "strip_size_kb": 64, 00:14:39.758 "state": "configuring", 00:14:39.758 "raid_level": "raid5f", 00:14:39.758 "superblock": true, 00:14:39.758 "num_base_bdevs": 4, 00:14:39.758 "num_base_bdevs_discovered": 1, 00:14:39.758 "num_base_bdevs_operational": 4, 00:14:39.758 "base_bdevs_list": [ 00:14:39.758 { 00:14:39.758 "name": "pt1", 00:14:39.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:39.758 "is_configured": true, 00:14:39.758 "data_offset": 2048, 00:14:39.758 "data_size": 63488 00:14:39.758 }, 00:14:39.758 { 00:14:39.758 "name": null, 00:14:39.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.758 "is_configured": false, 00:14:39.758 "data_offset": 0, 00:14:39.758 "data_size": 63488 00:14:39.758 }, 00:14:39.758 { 00:14:39.758 "name": null, 00:14:39.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.758 "is_configured": false, 00:14:39.758 "data_offset": 2048, 00:14:39.758 "data_size": 63488 00:14:39.758 }, 00:14:39.758 { 00:14:39.758 "name": null, 00:14:39.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:39.758 "is_configured": false, 00:14:39.758 "data_offset": 2048, 00:14:39.758 "data_size": 63488 00:14:39.758 } 00:14:39.758 ] 00:14:39.758 }' 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.758 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.329 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:40.329 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.329 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:40.329 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.329 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.329 [2024-09-30 23:32:19.995460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.329 [2024-09-30 23:32:19.995564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.329 [2024-09-30 23:32:19.995594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:40.329 [2024-09-30 23:32:19.995623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.329 [2024-09-30 23:32:19.995976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.329 [2024-09-30 23:32:19.996038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.329 [2024-09-30 23:32:19.996126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.329 [2024-09-30 23:32:19.996173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.329 pt2 00:14:40.329 23:32:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.329 23:32:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.329 [2024-09-30 23:32:20.007397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:40.329 [2024-09-30 23:32:20.007486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.329 [2024-09-30 23:32:20.007534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:40.329 [2024-09-30 23:32:20.007562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.329 [2024-09-30 23:32:20.007889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.329 [2024-09-30 23:32:20.007947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.329 [2024-09-30 23:32:20.008023] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:40.329 [2024-09-30 23:32:20.008067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.329 pt3 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.329 [2024-09-30 23:32:20.019388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:40.329 [2024-09-30 23:32:20.019493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.329 [2024-09-30 23:32:20.019524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:40.329 [2024-09-30 23:32:20.019551] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.329 [2024-09-30 23:32:20.019842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.329 [2024-09-30 23:32:20.019911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:40.329 [2024-09-30 23:32:20.019984] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:40.329 [2024-09-30 23:32:20.020033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:40.329 [2024-09-30 23:32:20.020144] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:40.329 [2024-09-30 23:32:20.020182] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:40.329 [2024-09-30 23:32:20.020412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:40.329 [2024-09-30 23:32:20.020911] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:40.329 [2024-09-30 23:32:20.020957] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:40.329 [2024-09-30 23:32:20.021086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.329 pt4 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.329 "name": "raid_bdev1", 00:14:40.329 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:40.329 "strip_size_kb": 64, 00:14:40.329 "state": "online", 00:14:40.329 "raid_level": "raid5f", 00:14:40.329 "superblock": true, 00:14:40.329 "num_base_bdevs": 4, 00:14:40.329 "num_base_bdevs_discovered": 4, 00:14:40.329 "num_base_bdevs_operational": 4, 00:14:40.329 "base_bdevs_list": [ 00:14:40.329 { 00:14:40.329 "name": "pt1", 00:14:40.329 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.329 "is_configured": true, 00:14:40.329 
"data_offset": 2048, 00:14:40.329 "data_size": 63488 00:14:40.329 }, 00:14:40.329 { 00:14:40.329 "name": "pt2", 00:14:40.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.329 "is_configured": true, 00:14:40.329 "data_offset": 2048, 00:14:40.329 "data_size": 63488 00:14:40.329 }, 00:14:40.329 { 00:14:40.329 "name": "pt3", 00:14:40.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.329 "is_configured": true, 00:14:40.329 "data_offset": 2048, 00:14:40.329 "data_size": 63488 00:14:40.329 }, 00:14:40.329 { 00:14:40.329 "name": "pt4", 00:14:40.329 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:40.329 "is_configured": true, 00:14:40.329 "data_offset": 2048, 00:14:40.329 "data_size": 63488 00:14:40.329 } 00:14:40.329 ] 00:14:40.329 }' 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.329 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.900 23:32:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.900 [2024-09-30 23:32:20.458804] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:40.900 "name": "raid_bdev1", 00:14:40.900 "aliases": [ 00:14:40.900 "36fe2ebe-eafe-4931-9a60-268d657ccdfb" 00:14:40.900 ], 00:14:40.900 "product_name": "Raid Volume", 00:14:40.900 "block_size": 512, 00:14:40.900 "num_blocks": 190464, 00:14:40.900 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:40.900 "assigned_rate_limits": { 00:14:40.900 "rw_ios_per_sec": 0, 00:14:40.900 "rw_mbytes_per_sec": 0, 00:14:40.900 "r_mbytes_per_sec": 0, 00:14:40.900 "w_mbytes_per_sec": 0 00:14:40.900 }, 00:14:40.900 "claimed": false, 00:14:40.900 "zoned": false, 00:14:40.900 "supported_io_types": { 00:14:40.900 "read": true, 00:14:40.900 "write": true, 00:14:40.900 "unmap": false, 00:14:40.900 "flush": false, 00:14:40.900 "reset": true, 00:14:40.900 "nvme_admin": false, 00:14:40.900 "nvme_io": false, 00:14:40.900 "nvme_io_md": false, 00:14:40.900 "write_zeroes": true, 00:14:40.900 "zcopy": false, 00:14:40.900 "get_zone_info": false, 00:14:40.900 "zone_management": false, 00:14:40.900 "zone_append": false, 00:14:40.900 "compare": false, 00:14:40.900 "compare_and_write": false, 00:14:40.900 "abort": false, 00:14:40.900 "seek_hole": false, 00:14:40.900 "seek_data": false, 00:14:40.900 "copy": false, 00:14:40.900 "nvme_iov_md": false 00:14:40.900 }, 00:14:40.900 "driver_specific": { 00:14:40.900 "raid": { 00:14:40.900 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:40.900 "strip_size_kb": 64, 00:14:40.900 "state": "online", 00:14:40.900 "raid_level": "raid5f", 00:14:40.900 "superblock": true, 00:14:40.900 "num_base_bdevs": 4, 00:14:40.900 "num_base_bdevs_discovered": 4, 
00:14:40.900 "num_base_bdevs_operational": 4, 00:14:40.900 "base_bdevs_list": [ 00:14:40.900 { 00:14:40.900 "name": "pt1", 00:14:40.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.900 "is_configured": true, 00:14:40.900 "data_offset": 2048, 00:14:40.900 "data_size": 63488 00:14:40.900 }, 00:14:40.900 { 00:14:40.900 "name": "pt2", 00:14:40.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.900 "is_configured": true, 00:14:40.900 "data_offset": 2048, 00:14:40.900 "data_size": 63488 00:14:40.900 }, 00:14:40.900 { 00:14:40.900 "name": "pt3", 00:14:40.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.900 "is_configured": true, 00:14:40.900 "data_offset": 2048, 00:14:40.900 "data_size": 63488 00:14:40.900 }, 00:14:40.900 { 00:14:40.900 "name": "pt4", 00:14:40.900 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:40.900 "is_configured": true, 00:14:40.900 "data_offset": 2048, 00:14:40.900 "data_size": 63488 00:14:40.900 } 00:14:40.900 ] 00:14:40.900 } 00:14:40.900 } 00:14:40.900 }' 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:40.900 pt2 00:14:40.900 pt3 00:14:40.900 pt4' 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.900 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.901 23:32:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.901 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:41.161 [2024-09-30 23:32:20.774238] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.161 
23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 36fe2ebe-eafe-4931-9a60-268d657ccdfb '!=' 36fe2ebe-eafe-4931-9a60-268d657ccdfb ']' 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.161 [2024-09-30 23:32:20.818030] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.161 "name": "raid_bdev1", 00:14:41.161 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:41.161 "strip_size_kb": 64, 00:14:41.161 "state": "online", 00:14:41.161 "raid_level": "raid5f", 00:14:41.161 "superblock": true, 00:14:41.161 "num_base_bdevs": 4, 00:14:41.161 "num_base_bdevs_discovered": 3, 00:14:41.161 "num_base_bdevs_operational": 3, 00:14:41.161 "base_bdevs_list": [ 00:14:41.161 { 00:14:41.161 "name": null, 00:14:41.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.161 "is_configured": false, 00:14:41.161 "data_offset": 0, 00:14:41.161 "data_size": 63488 00:14:41.161 }, 00:14:41.161 { 00:14:41.161 "name": "pt2", 00:14:41.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.161 "is_configured": true, 00:14:41.161 "data_offset": 2048, 00:14:41.161 "data_size": 63488 00:14:41.161 }, 00:14:41.161 { 00:14:41.161 "name": "pt3", 00:14:41.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.161 "is_configured": true, 00:14:41.161 "data_offset": 2048, 00:14:41.161 "data_size": 63488 00:14:41.161 }, 00:14:41.161 { 00:14:41.161 "name": "pt4", 00:14:41.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.161 "is_configured": true, 00:14:41.161 
"data_offset": 2048, 00:14:41.161 "data_size": 63488 00:14:41.161 } 00:14:41.161 ] 00:14:41.161 }' 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.161 23:32:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.420 [2024-09-30 23:32:21.213339] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.420 [2024-09-30 23:32:21.213415] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.420 [2024-09-30 23:32:21.213513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.420 [2024-09-30 23:32:21.213597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.420 [2024-09-30 23:32:21.213649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.420 23:32:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:41.421 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:41.421 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:41.421 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:41.421 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.421 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.421 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 [2024-09-30 23:32:21.313145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.680 [2024-09-30 23:32:21.313259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.680 [2024-09-30 23:32:21.313292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:41.680 [2024-09-30 23:32:21.313320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.680 [2024-09-30 23:32:21.315378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.680 [2024-09-30 23:32:21.315446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.680 [2024-09-30 23:32:21.315549] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:41.680 [2024-09-30 23:32:21.315599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.680 pt2 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.680 "name": "raid_bdev1", 00:14:41.680 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:41.680 "strip_size_kb": 64, 00:14:41.680 "state": "configuring", 00:14:41.680 "raid_level": "raid5f", 00:14:41.680 "superblock": true, 00:14:41.680 
"num_base_bdevs": 4, 00:14:41.680 "num_base_bdevs_discovered": 1, 00:14:41.680 "num_base_bdevs_operational": 3, 00:14:41.680 "base_bdevs_list": [ 00:14:41.680 { 00:14:41.680 "name": null, 00:14:41.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.680 "is_configured": false, 00:14:41.680 "data_offset": 2048, 00:14:41.680 "data_size": 63488 00:14:41.680 }, 00:14:41.680 { 00:14:41.680 "name": "pt2", 00:14:41.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.680 "is_configured": true, 00:14:41.680 "data_offset": 2048, 00:14:41.680 "data_size": 63488 00:14:41.680 }, 00:14:41.680 { 00:14:41.680 "name": null, 00:14:41.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.680 "is_configured": false, 00:14:41.680 "data_offset": 2048, 00:14:41.680 "data_size": 63488 00:14:41.680 }, 00:14:41.680 { 00:14:41.680 "name": null, 00:14:41.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.680 "is_configured": false, 00:14:41.680 "data_offset": 2048, 00:14:41.680 "data_size": 63488 00:14:41.680 } 00:14:41.680 ] 00:14:41.680 }' 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.680 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.939 [2024-09-30 23:32:21.716488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:41.939 [2024-09-30 
23:32:21.716587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.939 [2024-09-30 23:32:21.716621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:41.939 [2024-09-30 23:32:21.716680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.939 [2024-09-30 23:32:21.717090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.939 [2024-09-30 23:32:21.717156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:41.939 [2024-09-30 23:32:21.717252] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:41.939 [2024-09-30 23:32:21.717310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:41.939 pt3 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.939 "name": "raid_bdev1", 00:14:41.939 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:41.939 "strip_size_kb": 64, 00:14:41.939 "state": "configuring", 00:14:41.939 "raid_level": "raid5f", 00:14:41.939 "superblock": true, 00:14:41.939 "num_base_bdevs": 4, 00:14:41.939 "num_base_bdevs_discovered": 2, 00:14:41.939 "num_base_bdevs_operational": 3, 00:14:41.939 "base_bdevs_list": [ 00:14:41.939 { 00:14:41.939 "name": null, 00:14:41.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.939 "is_configured": false, 00:14:41.939 "data_offset": 2048, 00:14:41.939 "data_size": 63488 00:14:41.939 }, 00:14:41.939 { 00:14:41.939 "name": "pt2", 00:14:41.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.939 "is_configured": true, 00:14:41.939 "data_offset": 2048, 00:14:41.939 "data_size": 63488 00:14:41.939 }, 00:14:41.939 { 00:14:41.939 "name": "pt3", 00:14:41.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.939 "is_configured": true, 00:14:41.939 "data_offset": 2048, 00:14:41.939 "data_size": 63488 00:14:41.939 }, 00:14:41.939 { 00:14:41.939 "name": null, 00:14:41.939 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.939 "is_configured": false, 00:14:41.939 "data_offset": 2048, 
00:14:41.939 "data_size": 63488 00:14:41.939 } 00:14:41.939 ] 00:14:41.939 }' 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.939 23:32:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.509 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:42.509 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:42.509 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:42.509 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:42.509 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.509 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.509 [2024-09-30 23:32:22.095808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:42.509 [2024-09-30 23:32:22.095937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.509 [2024-09-30 23:32:22.095975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:42.509 [2024-09-30 23:32:22.096005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.509 [2024-09-30 23:32:22.096330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.509 [2024-09-30 23:32:22.096395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:42.509 [2024-09-30 23:32:22.096486] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:42.509 [2024-09-30 23:32:22.096533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:42.509 [2024-09-30 23:32:22.096647] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:42.509 [2024-09-30 23:32:22.096685] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:42.510 [2024-09-30 23:32:22.096923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:42.510 [2024-09-30 23:32:22.097458] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:42.510 [2024-09-30 23:32:22.097504] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:42.510 [2024-09-30 23:32:22.097759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.510 pt4 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.510 
23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.510 "name": "raid_bdev1", 00:14:42.510 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:42.510 "strip_size_kb": 64, 00:14:42.510 "state": "online", 00:14:42.510 "raid_level": "raid5f", 00:14:42.510 "superblock": true, 00:14:42.510 "num_base_bdevs": 4, 00:14:42.510 "num_base_bdevs_discovered": 3, 00:14:42.510 "num_base_bdevs_operational": 3, 00:14:42.510 "base_bdevs_list": [ 00:14:42.510 { 00:14:42.510 "name": null, 00:14:42.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.510 "is_configured": false, 00:14:42.510 "data_offset": 2048, 00:14:42.510 "data_size": 63488 00:14:42.510 }, 00:14:42.510 { 00:14:42.510 "name": "pt2", 00:14:42.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.510 "is_configured": true, 00:14:42.510 "data_offset": 2048, 00:14:42.510 "data_size": 63488 00:14:42.510 }, 00:14:42.510 { 00:14:42.510 "name": "pt3", 00:14:42.510 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.510 "is_configured": true, 00:14:42.510 "data_offset": 2048, 00:14:42.510 "data_size": 63488 00:14:42.510 }, 00:14:42.510 { 00:14:42.510 "name": "pt4", 00:14:42.510 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:42.510 "is_configured": true, 00:14:42.510 "data_offset": 2048, 00:14:42.510 "data_size": 63488 00:14:42.510 } 00:14:42.510 ] 00:14:42.510 }' 00:14:42.510 23:32:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.510 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.770 [2024-09-30 23:32:22.455222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.770 [2024-09-30 23:32:22.455301] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.770 [2024-09-30 23:32:22.455414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.770 [2024-09-30 23:32:22.455504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.770 [2024-09-30 23:32:22.455552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:42.770 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.771 [2024-09-30 23:32:22.515121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:42.771 [2024-09-30 23:32:22.515216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.771 [2024-09-30 23:32:22.515253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:42.771 [2024-09-30 23:32:22.515280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.771 [2024-09-30 23:32:22.517463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.771 [2024-09-30 23:32:22.517532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:42.771 [2024-09-30 23:32:22.517618] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:42.771 [2024-09-30 23:32:22.517705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:42.771 
[2024-09-30 23:32:22.517834] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:42.771 [2024-09-30 23:32:22.517906] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.771 [2024-09-30 23:32:22.517950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:42.771 [2024-09-30 23:32:22.518011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.771 [2024-09-30 23:32:22.518164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.771 pt1 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.771 "name": "raid_bdev1", 00:14:42.771 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:42.771 "strip_size_kb": 64, 00:14:42.771 "state": "configuring", 00:14:42.771 "raid_level": "raid5f", 00:14:42.771 "superblock": true, 00:14:42.771 "num_base_bdevs": 4, 00:14:42.771 "num_base_bdevs_discovered": 2, 00:14:42.771 "num_base_bdevs_operational": 3, 00:14:42.771 "base_bdevs_list": [ 00:14:42.771 { 00:14:42.771 "name": null, 00:14:42.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.771 "is_configured": false, 00:14:42.771 "data_offset": 2048, 00:14:42.771 "data_size": 63488 00:14:42.771 }, 00:14:42.771 { 00:14:42.771 "name": "pt2", 00:14:42.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.771 "is_configured": true, 00:14:42.771 "data_offset": 2048, 00:14:42.771 "data_size": 63488 00:14:42.771 }, 00:14:42.771 { 00:14:42.771 "name": "pt3", 00:14:42.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.771 "is_configured": true, 00:14:42.771 "data_offset": 2048, 00:14:42.771 "data_size": 63488 00:14:42.771 }, 00:14:42.771 { 00:14:42.771 "name": null, 00:14:42.771 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:42.771 "is_configured": false, 00:14:42.771 "data_offset": 2048, 00:14:42.771 "data_size": 63488 00:14:42.771 } 00:14:42.771 ] 
00:14:42.771 }' 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.771 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.341 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:43.341 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.341 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.341 23:32:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:43.341 23:32:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.341 [2024-09-30 23:32:23.030218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:43.341 [2024-09-30 23:32:23.030316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.341 [2024-09-30 23:32:23.030366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:43.341 [2024-09-30 23:32:23.030395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.341 [2024-09-30 23:32:23.030763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.341 [2024-09-30 23:32:23.030819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:43.341 [2024-09-30 23:32:23.030911] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:43.341 [2024-09-30 23:32:23.030962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:43.341 [2024-09-30 23:32:23.031083] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:43.341 [2024-09-30 23:32:23.031124] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:43.341 [2024-09-30 23:32:23.031366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:43.341 [2024-09-30 23:32:23.031951] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:43.341 [2024-09-30 23:32:23.032002] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:43.341 [2024-09-30 23:32:23.032216] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.341 pt4 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.341 23:32:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.341 "name": "raid_bdev1", 00:14:43.341 "uuid": "36fe2ebe-eafe-4931-9a60-268d657ccdfb", 00:14:43.341 "strip_size_kb": 64, 00:14:43.341 "state": "online", 00:14:43.341 "raid_level": "raid5f", 00:14:43.341 "superblock": true, 00:14:43.341 "num_base_bdevs": 4, 00:14:43.341 "num_base_bdevs_discovered": 3, 00:14:43.341 "num_base_bdevs_operational": 3, 00:14:43.341 "base_bdevs_list": [ 00:14:43.341 { 00:14:43.341 "name": null, 00:14:43.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.341 "is_configured": false, 00:14:43.341 "data_offset": 2048, 00:14:43.341 "data_size": 63488 00:14:43.341 }, 00:14:43.341 { 00:14:43.341 "name": "pt2", 00:14:43.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.341 "is_configured": true, 00:14:43.341 "data_offset": 2048, 00:14:43.341 "data_size": 63488 00:14:43.341 }, 00:14:43.341 { 00:14:43.341 "name": "pt3", 00:14:43.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.341 "is_configured": true, 00:14:43.341 "data_offset": 2048, 00:14:43.341 "data_size": 63488 
00:14:43.341 }, 00:14:43.341 { 00:14:43.341 "name": "pt4", 00:14:43.341 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:43.341 "is_configured": true, 00:14:43.341 "data_offset": 2048, 00:14:43.341 "data_size": 63488 00:14:43.341 } 00:14:43.341 ] 00:14:43.341 }' 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.341 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.911 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:43.911 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:43.911 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.911 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.911 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.911 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:43.912 [2024-09-30 23:32:23.509624] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 36fe2ebe-eafe-4931-9a60-268d657ccdfb '!=' 36fe2ebe-eafe-4931-9a60-268d657ccdfb ']' 00:14:43.912 23:32:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94588 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94588 ']' 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94588 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94588 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94588' 00:14:43.912 killing process with pid 94588 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94588 00:14:43.912 [2024-09-30 23:32:23.594390] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.912 [2024-09-30 23:32:23.594467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.912 [2024-09-30 23:32:23.594541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.912 [2024-09-30 23:32:23.594552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:43.912 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94588 00:14:43.912 [2024-09-30 23:32:23.637988] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.172 23:32:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:44.172 
00:14:44.172 real 0m6.777s 00:14:44.172 user 0m11.370s 00:14:44.172 sys 0m1.416s 00:14:44.172 ************************************ 00:14:44.172 END TEST raid5f_superblock_test 00:14:44.172 ************************************ 00:14:44.172 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.172 23:32:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.172 23:32:23 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:44.172 23:32:23 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:44.172 23:32:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:44.172 23:32:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.172 23:32:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.172 ************************************ 00:14:44.172 START TEST raid5f_rebuild_test 00:14:44.172 ************************************ 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:44.172 23:32:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95059 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95059 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95059 ']' 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.172 23:32:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.431 [2024-09-30 23:32:24.062122] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:14:44.431 [2024-09-30 23:32:24.062383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95059 ] 00:14:44.431 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:44.431 Zero copy mechanism will not be used. 00:14:44.432 [2024-09-30 23:32:24.222672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.432 [2024-09-30 23:32:24.268348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.691 [2024-09-30 23:32:24.310607] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.691 [2024-09-30 23:32:24.310713] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.260 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.260 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:45.260 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 BaseBdev1_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:45.261 [2024-09-30 23:32:24.896682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:45.261 [2024-09-30 23:32:24.896826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.261 [2024-09-30 23:32:24.896900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:45.261 [2024-09-30 23:32:24.896938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.261 [2024-09-30 23:32:24.899026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.261 [2024-09-30 23:32:24.899092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:45.261 BaseBdev1 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 BaseBdev2_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 [2024-09-30 23:32:24.934569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:45.261 [2024-09-30 23:32:24.934666] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.261 [2024-09-30 23:32:24.934718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:45.261 [2024-09-30 23:32:24.934746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.261 [2024-09-30 23:32:24.936749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.261 [2024-09-30 23:32:24.936785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:45.261 BaseBdev2 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 BaseBdev3_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 [2024-09-30 23:32:24.962936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:45.261 [2024-09-30 23:32:24.963029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.261 [2024-09-30 23:32:24.963084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:45.261 
[2024-09-30 23:32:24.963112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.261 [2024-09-30 23:32:24.965111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.261 [2024-09-30 23:32:24.965177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:45.261 BaseBdev3 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 BaseBdev4_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 [2024-09-30 23:32:24.991259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:45.261 [2024-09-30 23:32:24.991359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.261 [2024-09-30 23:32:24.991422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:45.261 [2024-09-30 23:32:24.991450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.261 [2024-09-30 23:32:24.993405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:45.261 [2024-09-30 23:32:24.993466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:45.261 BaseBdev4 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 spare_malloc 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 spare_delay 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 [2024-09-30 23:32:25.031517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.261 [2024-09-30 23:32:25.031630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.261 [2024-09-30 23:32:25.031668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:45.261 [2024-09-30 23:32:25.031696] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.261 [2024-09-30 23:32:25.033680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.261 [2024-09-30 23:32:25.033742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.261 spare 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.261 [2024-09-30 23:32:25.043599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.261 [2024-09-30 23:32:25.045369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.261 [2024-09-30 23:32:25.045465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.261 [2024-09-30 23:32:25.045520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.261 [2024-09-30 23:32:25.045646] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:45.261 [2024-09-30 23:32:25.045701] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:45.261 [2024-09-30 23:32:25.045959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:45.261 [2024-09-30 23:32:25.046422] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:45.261 [2024-09-30 23:32:25.046470] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:45.261 [2024-09-30 
23:32:25.046631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.261 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.262 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.262 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.262 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.262 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.262 "name": "raid_bdev1", 00:14:45.262 "uuid": 
"56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:45.262 "strip_size_kb": 64, 00:14:45.262 "state": "online", 00:14:45.262 "raid_level": "raid5f", 00:14:45.262 "superblock": false, 00:14:45.262 "num_base_bdevs": 4, 00:14:45.262 "num_base_bdevs_discovered": 4, 00:14:45.262 "num_base_bdevs_operational": 4, 00:14:45.262 "base_bdevs_list": [ 00:14:45.262 { 00:14:45.262 "name": "BaseBdev1", 00:14:45.262 "uuid": "58c9f42d-ff0e-5274-ab14-7148a6e72065", 00:14:45.262 "is_configured": true, 00:14:45.262 "data_offset": 0, 00:14:45.262 "data_size": 65536 00:14:45.262 }, 00:14:45.262 { 00:14:45.262 "name": "BaseBdev2", 00:14:45.262 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:45.262 "is_configured": true, 00:14:45.262 "data_offset": 0, 00:14:45.262 "data_size": 65536 00:14:45.262 }, 00:14:45.262 { 00:14:45.262 "name": "BaseBdev3", 00:14:45.262 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:45.262 "is_configured": true, 00:14:45.262 "data_offset": 0, 00:14:45.262 "data_size": 65536 00:14:45.262 }, 00:14:45.262 { 00:14:45.262 "name": "BaseBdev4", 00:14:45.262 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:45.262 "is_configured": true, 00:14:45.262 "data_offset": 0, 00:14:45.262 "data_size": 65536 00:14:45.262 } 00:14:45.262 ] 00:14:45.262 }' 00:14:45.262 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.262 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:45.830 [2024-09-30 23:32:25.455885] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:45.830 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:45.831 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:46.090 [2024-09-30 23:32:25.723324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:46.090 /dev/nbd0 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.090 1+0 records in 00:14:46.090 1+0 records out 00:14:46.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361668 s, 11.3 MB/s 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.090 23:32:25 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:46.090 23:32:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:46.659 512+0 records in 00:14:46.659 512+0 records out 00:14:46.659 100663296 bytes (101 MB, 96 MiB) copied, 0.61084 s, 165 MB/s 00:14:46.659 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:46.659 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.659 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.659 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.659 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:46.659 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.659 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:46.919 [2024-09-30 23:32:26.612012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.919 [2024-09-30 23:32:26.642795] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.919 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.919 "name": "raid_bdev1", 00:14:46.919 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:46.919 "strip_size_kb": 64, 00:14:46.919 "state": "online", 00:14:46.919 "raid_level": "raid5f", 00:14:46.919 "superblock": false, 00:14:46.919 "num_base_bdevs": 4, 00:14:46.919 "num_base_bdevs_discovered": 3, 00:14:46.919 "num_base_bdevs_operational": 3, 00:14:46.919 "base_bdevs_list": [ 00:14:46.919 { 00:14:46.919 "name": null, 00:14:46.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.919 "is_configured": false, 00:14:46.919 "data_offset": 0, 00:14:46.919 "data_size": 65536 00:14:46.919 }, 00:14:46.919 { 00:14:46.919 "name": "BaseBdev2", 00:14:46.919 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:46.919 "is_configured": true, 00:14:46.919 
"data_offset": 0, 00:14:46.919 "data_size": 65536 00:14:46.919 }, 00:14:46.919 { 00:14:46.919 "name": "BaseBdev3", 00:14:46.919 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:46.919 "is_configured": true, 00:14:46.919 "data_offset": 0, 00:14:46.919 "data_size": 65536 00:14:46.919 }, 00:14:46.919 { 00:14:46.919 "name": "BaseBdev4", 00:14:46.919 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:46.919 "is_configured": true, 00:14:46.919 "data_offset": 0, 00:14:46.919 "data_size": 65536 00:14:46.919 } 00:14:46.919 ] 00:14:46.919 }' 00:14:46.920 23:32:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.920 23:32:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.488 23:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.488 23:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.488 23:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.488 [2024-09-30 23:32:27.106003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.488 [2024-09-30 23:32:27.111816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:47.488 23:32:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.488 23:32:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:47.488 [2024-09-30 23:32:27.114133] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.426 "name": "raid_bdev1", 00:14:48.426 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:48.426 "strip_size_kb": 64, 00:14:48.426 "state": "online", 00:14:48.426 "raid_level": "raid5f", 00:14:48.426 "superblock": false, 00:14:48.426 "num_base_bdevs": 4, 00:14:48.426 "num_base_bdevs_discovered": 4, 00:14:48.426 "num_base_bdevs_operational": 4, 00:14:48.426 "process": { 00:14:48.426 "type": "rebuild", 00:14:48.426 "target": "spare", 00:14:48.426 "progress": { 00:14:48.426 "blocks": 19200, 00:14:48.426 "percent": 9 00:14:48.426 } 00:14:48.426 }, 00:14:48.426 "base_bdevs_list": [ 00:14:48.426 { 00:14:48.426 "name": "spare", 00:14:48.426 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:48.426 "is_configured": true, 00:14:48.426 "data_offset": 0, 00:14:48.426 "data_size": 65536 00:14:48.426 }, 00:14:48.426 { 00:14:48.426 "name": "BaseBdev2", 00:14:48.426 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:48.426 "is_configured": true, 00:14:48.426 "data_offset": 0, 00:14:48.426 "data_size": 65536 00:14:48.426 }, 00:14:48.426 { 00:14:48.426 "name": "BaseBdev3", 00:14:48.426 "uuid": 
"e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:48.426 "is_configured": true, 00:14:48.426 "data_offset": 0, 00:14:48.426 "data_size": 65536 00:14:48.426 }, 00:14:48.426 { 00:14:48.426 "name": "BaseBdev4", 00:14:48.426 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:48.426 "is_configured": true, 00:14:48.426 "data_offset": 0, 00:14:48.426 "data_size": 65536 00:14:48.426 } 00:14:48.426 ] 00:14:48.426 }' 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.426 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.426 [2024-09-30 23:32:28.253444] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.692 [2024-09-30 23:32:28.320566] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.693 [2024-09-30 23:32:28.320693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.693 [2024-09-30 23:32:28.320742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.693 [2024-09-30 23:32:28.320766] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.693 "name": "raid_bdev1", 00:14:48.693 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:48.693 "strip_size_kb": 64, 00:14:48.693 "state": "online", 00:14:48.693 "raid_level": "raid5f", 00:14:48.693 "superblock": false, 00:14:48.693 "num_base_bdevs": 4, 00:14:48.693 "num_base_bdevs_discovered": 3, 00:14:48.693 
"num_base_bdevs_operational": 3, 00:14:48.693 "base_bdevs_list": [ 00:14:48.693 { 00:14:48.693 "name": null, 00:14:48.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.693 "is_configured": false, 00:14:48.693 "data_offset": 0, 00:14:48.693 "data_size": 65536 00:14:48.693 }, 00:14:48.693 { 00:14:48.693 "name": "BaseBdev2", 00:14:48.693 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:48.693 "is_configured": true, 00:14:48.693 "data_offset": 0, 00:14:48.693 "data_size": 65536 00:14:48.693 }, 00:14:48.693 { 00:14:48.693 "name": "BaseBdev3", 00:14:48.693 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:48.693 "is_configured": true, 00:14:48.693 "data_offset": 0, 00:14:48.693 "data_size": 65536 00:14:48.693 }, 00:14:48.693 { 00:14:48.693 "name": "BaseBdev4", 00:14:48.693 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:48.693 "is_configured": true, 00:14:48.693 "data_offset": 0, 00:14:48.693 "data_size": 65536 00:14:48.693 } 00:14:48.693 ] 00:14:48.693 }' 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.693 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.968 23:32:28 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.968 "name": "raid_bdev1", 00:14:48.968 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:48.968 "strip_size_kb": 64, 00:14:48.968 "state": "online", 00:14:48.968 "raid_level": "raid5f", 00:14:48.968 "superblock": false, 00:14:48.968 "num_base_bdevs": 4, 00:14:48.968 "num_base_bdevs_discovered": 3, 00:14:48.968 "num_base_bdevs_operational": 3, 00:14:48.968 "base_bdevs_list": [ 00:14:48.968 { 00:14:48.968 "name": null, 00:14:48.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.968 "is_configured": false, 00:14:48.968 "data_offset": 0, 00:14:48.968 "data_size": 65536 00:14:48.968 }, 00:14:48.968 { 00:14:48.968 "name": "BaseBdev2", 00:14:48.968 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:48.968 "is_configured": true, 00:14:48.968 "data_offset": 0, 00:14:48.968 "data_size": 65536 00:14:48.968 }, 00:14:48.968 { 00:14:48.968 "name": "BaseBdev3", 00:14:48.968 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:48.968 "is_configured": true, 00:14:48.968 "data_offset": 0, 00:14:48.968 "data_size": 65536 00:14:48.968 }, 00:14:48.968 { 00:14:48.968 "name": "BaseBdev4", 00:14:48.968 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:48.968 "is_configured": true, 00:14:48.968 "data_offset": 0, 00:14:48.968 "data_size": 65536 00:14:48.968 } 00:14:48.968 ] 00:14:48.968 }' 00:14:48.968 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.248 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.248 23:32:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.248 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.248 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.248 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.248 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.248 [2024-09-30 23:32:28.892650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.248 [2024-09-30 23:32:28.897340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:49.248 23:32:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.248 23:32:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:49.248 [2024-09-30 23:32:28.899732] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.196 
23:32:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.196 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.196 "name": "raid_bdev1", 00:14:50.196 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:50.196 "strip_size_kb": 64, 00:14:50.196 "state": "online", 00:14:50.196 "raid_level": "raid5f", 00:14:50.196 "superblock": false, 00:14:50.196 "num_base_bdevs": 4, 00:14:50.196 "num_base_bdevs_discovered": 4, 00:14:50.196 "num_base_bdevs_operational": 4, 00:14:50.196 "process": { 00:14:50.196 "type": "rebuild", 00:14:50.196 "target": "spare", 00:14:50.196 "progress": { 00:14:50.196 "blocks": 19200, 00:14:50.196 "percent": 9 00:14:50.196 } 00:14:50.196 }, 00:14:50.196 "base_bdevs_list": [ 00:14:50.196 { 00:14:50.196 "name": "spare", 00:14:50.196 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:50.196 "is_configured": true, 00:14:50.196 "data_offset": 0, 00:14:50.196 "data_size": 65536 00:14:50.196 }, 00:14:50.196 { 00:14:50.196 "name": "BaseBdev2", 00:14:50.196 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:50.196 "is_configured": true, 00:14:50.196 "data_offset": 0, 00:14:50.196 "data_size": 65536 00:14:50.196 }, 00:14:50.196 { 00:14:50.196 "name": "BaseBdev3", 00:14:50.196 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:50.196 "is_configured": true, 00:14:50.196 "data_offset": 0, 00:14:50.196 "data_size": 65536 00:14:50.196 }, 00:14:50.196 { 00:14:50.196 "name": "BaseBdev4", 00:14:50.196 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:50.196 "is_configured": true, 00:14:50.196 "data_offset": 0, 00:14:50.196 "data_size": 65536 00:14:50.196 } 00:14:50.196 ] 00:14:50.197 }' 00:14:50.197 23:32:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=511 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.197 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:50.461 "name": "raid_bdev1", 00:14:50.461 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:50.461 "strip_size_kb": 64, 00:14:50.461 "state": "online", 00:14:50.461 "raid_level": "raid5f", 00:14:50.461 "superblock": false, 00:14:50.461 "num_base_bdevs": 4, 00:14:50.461 "num_base_bdevs_discovered": 4, 00:14:50.461 "num_base_bdevs_operational": 4, 00:14:50.461 "process": { 00:14:50.461 "type": "rebuild", 00:14:50.461 "target": "spare", 00:14:50.461 "progress": { 00:14:50.461 "blocks": 21120, 00:14:50.461 "percent": 10 00:14:50.461 } 00:14:50.461 }, 00:14:50.461 "base_bdevs_list": [ 00:14:50.461 { 00:14:50.461 "name": "spare", 00:14:50.461 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:50.461 "is_configured": true, 00:14:50.461 "data_offset": 0, 00:14:50.461 "data_size": 65536 00:14:50.461 }, 00:14:50.461 { 00:14:50.461 "name": "BaseBdev2", 00:14:50.461 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:50.461 "is_configured": true, 00:14:50.461 "data_offset": 0, 00:14:50.461 "data_size": 65536 00:14:50.461 }, 00:14:50.461 { 00:14:50.461 "name": "BaseBdev3", 00:14:50.461 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:50.461 "is_configured": true, 00:14:50.461 "data_offset": 0, 00:14:50.461 "data_size": 65536 00:14:50.461 }, 00:14:50.461 { 00:14:50.461 "name": "BaseBdev4", 00:14:50.461 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:50.461 "is_configured": true, 00:14:50.461 "data_offset": 0, 00:14:50.461 "data_size": 65536 00:14:50.461 } 00:14:50.461 ] 00:14:50.461 }' 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.461 23:32:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.461 23:32:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.401 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.401 "name": "raid_bdev1", 00:14:51.401 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:51.401 "strip_size_kb": 64, 00:14:51.401 "state": "online", 00:14:51.401 "raid_level": "raid5f", 00:14:51.401 "superblock": false, 00:14:51.401 "num_base_bdevs": 4, 00:14:51.401 "num_base_bdevs_discovered": 4, 00:14:51.401 "num_base_bdevs_operational": 4, 00:14:51.401 "process": { 00:14:51.401 "type": "rebuild", 00:14:51.401 "target": "spare", 00:14:51.401 "progress": { 00:14:51.401 "blocks": 44160, 00:14:51.401 "percent": 22 00:14:51.401 } 00:14:51.401 }, 00:14:51.401 "base_bdevs_list": [ 00:14:51.401 { 
00:14:51.401 "name": "spare", 00:14:51.401 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:51.401 "is_configured": true, 00:14:51.401 "data_offset": 0, 00:14:51.401 "data_size": 65536 00:14:51.401 }, 00:14:51.401 { 00:14:51.401 "name": "BaseBdev2", 00:14:51.401 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:51.401 "is_configured": true, 00:14:51.401 "data_offset": 0, 00:14:51.401 "data_size": 65536 00:14:51.401 }, 00:14:51.401 { 00:14:51.401 "name": "BaseBdev3", 00:14:51.401 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:51.401 "is_configured": true, 00:14:51.401 "data_offset": 0, 00:14:51.401 "data_size": 65536 00:14:51.401 }, 00:14:51.401 { 00:14:51.401 "name": "BaseBdev4", 00:14:51.401 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:51.401 "is_configured": true, 00:14:51.401 "data_offset": 0, 00:14:51.401 "data_size": 65536 00:14:51.401 } 00:14:51.401 ] 00:14:51.401 }' 00:14:51.661 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.661 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.661 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.661 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.661 23:32:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.602 "name": "raid_bdev1", 00:14:52.602 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:52.602 "strip_size_kb": 64, 00:14:52.602 "state": "online", 00:14:52.602 "raid_level": "raid5f", 00:14:52.602 "superblock": false, 00:14:52.602 "num_base_bdevs": 4, 00:14:52.602 "num_base_bdevs_discovered": 4, 00:14:52.602 "num_base_bdevs_operational": 4, 00:14:52.602 "process": { 00:14:52.602 "type": "rebuild", 00:14:52.602 "target": "spare", 00:14:52.602 "progress": { 00:14:52.602 "blocks": 65280, 00:14:52.602 "percent": 33 00:14:52.602 } 00:14:52.602 }, 00:14:52.602 "base_bdevs_list": [ 00:14:52.602 { 00:14:52.602 "name": "spare", 00:14:52.602 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:52.602 "is_configured": true, 00:14:52.602 "data_offset": 0, 00:14:52.602 "data_size": 65536 00:14:52.602 }, 00:14:52.602 { 00:14:52.602 "name": "BaseBdev2", 00:14:52.602 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:52.602 "is_configured": true, 00:14:52.602 "data_offset": 0, 00:14:52.602 "data_size": 65536 00:14:52.602 }, 00:14:52.602 { 00:14:52.602 "name": "BaseBdev3", 00:14:52.602 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:52.602 "is_configured": true, 00:14:52.602 "data_offset": 0, 00:14:52.602 
"data_size": 65536 00:14:52.602 }, 00:14:52.602 { 00:14:52.602 "name": "BaseBdev4", 00:14:52.602 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:52.602 "is_configured": true, 00:14:52.602 "data_offset": 0, 00:14:52.602 "data_size": 65536 00:14:52.602 } 00:14:52.602 ] 00:14:52.602 }' 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.602 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.862 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.862 23:32:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.803 "name": "raid_bdev1", 00:14:53.803 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:53.803 "strip_size_kb": 64, 00:14:53.803 "state": "online", 00:14:53.803 "raid_level": "raid5f", 00:14:53.803 "superblock": false, 00:14:53.803 "num_base_bdevs": 4, 00:14:53.803 "num_base_bdevs_discovered": 4, 00:14:53.803 "num_base_bdevs_operational": 4, 00:14:53.803 "process": { 00:14:53.803 "type": "rebuild", 00:14:53.803 "target": "spare", 00:14:53.803 "progress": { 00:14:53.803 "blocks": 86400, 00:14:53.803 "percent": 43 00:14:53.803 } 00:14:53.803 }, 00:14:53.803 "base_bdevs_list": [ 00:14:53.803 { 00:14:53.803 "name": "spare", 00:14:53.803 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:53.803 "is_configured": true, 00:14:53.803 "data_offset": 0, 00:14:53.803 "data_size": 65536 00:14:53.803 }, 00:14:53.803 { 00:14:53.803 "name": "BaseBdev2", 00:14:53.803 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:53.803 "is_configured": true, 00:14:53.803 "data_offset": 0, 00:14:53.803 "data_size": 65536 00:14:53.803 }, 00:14:53.803 { 00:14:53.803 "name": "BaseBdev3", 00:14:53.803 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:53.803 "is_configured": true, 00:14:53.803 "data_offset": 0, 00:14:53.803 "data_size": 65536 00:14:53.803 }, 00:14:53.803 { 00:14:53.803 "name": "BaseBdev4", 00:14:53.803 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:53.803 "is_configured": true, 00:14:53.803 "data_offset": 0, 00:14:53.803 "data_size": 65536 00:14:53.803 } 00:14:53.803 ] 00:14:53.803 }' 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.803 23:32:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.185 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.185 "name": "raid_bdev1", 00:14:55.185 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:55.185 "strip_size_kb": 64, 00:14:55.185 "state": "online", 00:14:55.185 "raid_level": "raid5f", 00:14:55.185 "superblock": false, 00:14:55.185 "num_base_bdevs": 4, 00:14:55.185 "num_base_bdevs_discovered": 4, 00:14:55.185 "num_base_bdevs_operational": 4, 00:14:55.185 "process": { 00:14:55.185 "type": "rebuild", 00:14:55.185 "target": "spare", 00:14:55.185 
"progress": { 00:14:55.185 "blocks": 107520, 00:14:55.185 "percent": 54 00:14:55.185 } 00:14:55.185 }, 00:14:55.185 "base_bdevs_list": [ 00:14:55.185 { 00:14:55.185 "name": "spare", 00:14:55.185 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:55.185 "is_configured": true, 00:14:55.185 "data_offset": 0, 00:14:55.185 "data_size": 65536 00:14:55.185 }, 00:14:55.185 { 00:14:55.185 "name": "BaseBdev2", 00:14:55.185 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:55.185 "is_configured": true, 00:14:55.185 "data_offset": 0, 00:14:55.185 "data_size": 65536 00:14:55.185 }, 00:14:55.186 { 00:14:55.186 "name": "BaseBdev3", 00:14:55.186 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:55.186 "is_configured": true, 00:14:55.186 "data_offset": 0, 00:14:55.186 "data_size": 65536 00:14:55.186 }, 00:14:55.186 { 00:14:55.186 "name": "BaseBdev4", 00:14:55.186 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:55.186 "is_configured": true, 00:14:55.186 "data_offset": 0, 00:14:55.186 "data_size": 65536 00:14:55.186 } 00:14:55.186 ] 00:14:55.186 }' 00:14:55.186 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.186 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.186 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.186 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.186 23:32:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.125 23:32:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.125 "name": "raid_bdev1", 00:14:56.125 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:56.125 "strip_size_kb": 64, 00:14:56.125 "state": "online", 00:14:56.125 "raid_level": "raid5f", 00:14:56.125 "superblock": false, 00:14:56.125 "num_base_bdevs": 4, 00:14:56.125 "num_base_bdevs_discovered": 4, 00:14:56.125 "num_base_bdevs_operational": 4, 00:14:56.125 "process": { 00:14:56.125 "type": "rebuild", 00:14:56.125 "target": "spare", 00:14:56.125 "progress": { 00:14:56.125 "blocks": 130560, 00:14:56.125 "percent": 66 00:14:56.125 } 00:14:56.125 }, 00:14:56.125 "base_bdevs_list": [ 00:14:56.125 { 00:14:56.125 "name": "spare", 00:14:56.125 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:56.125 "is_configured": true, 00:14:56.125 "data_offset": 0, 00:14:56.125 "data_size": 65536 00:14:56.125 }, 00:14:56.125 { 00:14:56.125 "name": "BaseBdev2", 00:14:56.125 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:56.125 "is_configured": true, 00:14:56.125 "data_offset": 0, 00:14:56.125 "data_size": 65536 00:14:56.125 }, 00:14:56.125 { 
00:14:56.125 "name": "BaseBdev3", 00:14:56.125 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:56.125 "is_configured": true, 00:14:56.125 "data_offset": 0, 00:14:56.125 "data_size": 65536 00:14:56.125 }, 00:14:56.125 { 00:14:56.125 "name": "BaseBdev4", 00:14:56.125 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:56.125 "is_configured": true, 00:14:56.125 "data_offset": 0, 00:14:56.125 "data_size": 65536 00:14:56.125 } 00:14:56.125 ] 00:14:56.125 }' 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.125 23:32:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.065 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.065 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.065 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.065 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.065 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.065 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.324 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.324 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.324 23:32:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:57.324 23:32:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.324 23:32:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.324 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.324 "name": "raid_bdev1", 00:14:57.324 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:57.324 "strip_size_kb": 64, 00:14:57.324 "state": "online", 00:14:57.324 "raid_level": "raid5f", 00:14:57.324 "superblock": false, 00:14:57.324 "num_base_bdevs": 4, 00:14:57.324 "num_base_bdevs_discovered": 4, 00:14:57.324 "num_base_bdevs_operational": 4, 00:14:57.324 "process": { 00:14:57.324 "type": "rebuild", 00:14:57.324 "target": "spare", 00:14:57.324 "progress": { 00:14:57.324 "blocks": 151680, 00:14:57.324 "percent": 77 00:14:57.324 } 00:14:57.324 }, 00:14:57.324 "base_bdevs_list": [ 00:14:57.324 { 00:14:57.324 "name": "spare", 00:14:57.324 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:57.324 "is_configured": true, 00:14:57.324 "data_offset": 0, 00:14:57.324 "data_size": 65536 00:14:57.324 }, 00:14:57.324 { 00:14:57.324 "name": "BaseBdev2", 00:14:57.324 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:57.324 "is_configured": true, 00:14:57.324 "data_offset": 0, 00:14:57.324 "data_size": 65536 00:14:57.324 }, 00:14:57.324 { 00:14:57.324 "name": "BaseBdev3", 00:14:57.324 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:57.324 "is_configured": true, 00:14:57.324 "data_offset": 0, 00:14:57.324 "data_size": 65536 00:14:57.324 }, 00:14:57.324 { 00:14:57.324 "name": "BaseBdev4", 00:14:57.324 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:57.324 "is_configured": true, 00:14:57.325 "data_offset": 0, 00:14:57.325 "data_size": 65536 00:14:57.325 } 00:14:57.325 ] 00:14:57.325 }' 00:14:57.325 23:32:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.325 23:32:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.325 23:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.325 23:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.325 23:32:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.264 "name": "raid_bdev1", 00:14:58.264 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:58.264 "strip_size_kb": 64, 00:14:58.264 "state": "online", 00:14:58.264 "raid_level": "raid5f", 00:14:58.264 "superblock": false, 00:14:58.264 "num_base_bdevs": 4, 00:14:58.264 
"num_base_bdevs_discovered": 4, 00:14:58.264 "num_base_bdevs_operational": 4, 00:14:58.264 "process": { 00:14:58.264 "type": "rebuild", 00:14:58.264 "target": "spare", 00:14:58.264 "progress": { 00:14:58.264 "blocks": 174720, 00:14:58.264 "percent": 88 00:14:58.264 } 00:14:58.264 }, 00:14:58.264 "base_bdevs_list": [ 00:14:58.264 { 00:14:58.264 "name": "spare", 00:14:58.264 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:58.264 "is_configured": true, 00:14:58.264 "data_offset": 0, 00:14:58.264 "data_size": 65536 00:14:58.264 }, 00:14:58.264 { 00:14:58.264 "name": "BaseBdev2", 00:14:58.264 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:58.264 "is_configured": true, 00:14:58.264 "data_offset": 0, 00:14:58.264 "data_size": 65536 00:14:58.264 }, 00:14:58.264 { 00:14:58.264 "name": "BaseBdev3", 00:14:58.264 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:58.264 "is_configured": true, 00:14:58.264 "data_offset": 0, 00:14:58.264 "data_size": 65536 00:14:58.264 }, 00:14:58.264 { 00:14:58.264 "name": "BaseBdev4", 00:14:58.264 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:58.264 "is_configured": true, 00:14:58.264 "data_offset": 0, 00:14:58.264 "data_size": 65536 00:14:58.264 } 00:14:58.264 ] 00:14:58.264 }' 00:14:58.264 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.524 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.524 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.524 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.524 23:32:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.463 [2024-09-30 23:32:39.248147] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:59.463 [2024-09-30 23:32:39.248275] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:59.463 [2024-09-30 23:32:39.248339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.463 "name": "raid_bdev1", 00:14:59.463 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:14:59.463 "strip_size_kb": 64, 00:14:59.463 "state": "online", 00:14:59.463 "raid_level": "raid5f", 00:14:59.463 "superblock": false, 00:14:59.463 "num_base_bdevs": 4, 00:14:59.463 "num_base_bdevs_discovered": 4, 00:14:59.463 "num_base_bdevs_operational": 4, 00:14:59.463 "process": { 00:14:59.463 "type": "rebuild", 00:14:59.463 "target": "spare", 00:14:59.463 "progress": { 00:14:59.463 "blocks": 195840, 00:14:59.463 
"percent": 99 00:14:59.463 } 00:14:59.463 }, 00:14:59.463 "base_bdevs_list": [ 00:14:59.463 { 00:14:59.463 "name": "spare", 00:14:59.463 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:14:59.463 "is_configured": true, 00:14:59.463 "data_offset": 0, 00:14:59.463 "data_size": 65536 00:14:59.463 }, 00:14:59.463 { 00:14:59.463 "name": "BaseBdev2", 00:14:59.463 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:14:59.463 "is_configured": true, 00:14:59.463 "data_offset": 0, 00:14:59.463 "data_size": 65536 00:14:59.463 }, 00:14:59.463 { 00:14:59.463 "name": "BaseBdev3", 00:14:59.463 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:14:59.463 "is_configured": true, 00:14:59.463 "data_offset": 0, 00:14:59.463 "data_size": 65536 00:14:59.463 }, 00:14:59.463 { 00:14:59.463 "name": "BaseBdev4", 00:14:59.463 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:14:59.463 "is_configured": true, 00:14:59.463 "data_offset": 0, 00:14:59.463 "data_size": 65536 00:14:59.463 } 00:14:59.463 ] 00:14:59.463 }' 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.463 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.722 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.722 23:32:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.661 "name": "raid_bdev1", 00:15:00.661 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:15:00.661 "strip_size_kb": 64, 00:15:00.661 "state": "online", 00:15:00.661 "raid_level": "raid5f", 00:15:00.661 "superblock": false, 00:15:00.661 "num_base_bdevs": 4, 00:15:00.661 "num_base_bdevs_discovered": 4, 00:15:00.661 "num_base_bdevs_operational": 4, 00:15:00.661 "base_bdevs_list": [ 00:15:00.661 { 00:15:00.661 "name": "spare", 00:15:00.661 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:15:00.661 "is_configured": true, 00:15:00.661 "data_offset": 0, 00:15:00.661 "data_size": 65536 00:15:00.661 }, 00:15:00.661 { 00:15:00.661 "name": "BaseBdev2", 00:15:00.661 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:15:00.661 "is_configured": true, 00:15:00.661 "data_offset": 0, 00:15:00.661 "data_size": 65536 00:15:00.661 }, 00:15:00.661 { 00:15:00.661 "name": "BaseBdev3", 00:15:00.661 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:15:00.661 "is_configured": true, 00:15:00.661 "data_offset": 0, 00:15:00.661 "data_size": 65536 00:15:00.661 }, 00:15:00.661 { 00:15:00.661 "name": "BaseBdev4", 00:15:00.661 
"uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:15:00.661 "is_configured": true, 00:15:00.661 "data_offset": 0, 00:15:00.661 "data_size": 65536 00:15:00.661 } 00:15:00.661 ] 00:15:00.661 }' 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.661 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.921 "name": "raid_bdev1", 00:15:00.921 "uuid": 
"56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:15:00.921 "strip_size_kb": 64, 00:15:00.921 "state": "online", 00:15:00.921 "raid_level": "raid5f", 00:15:00.921 "superblock": false, 00:15:00.921 "num_base_bdevs": 4, 00:15:00.921 "num_base_bdevs_discovered": 4, 00:15:00.921 "num_base_bdevs_operational": 4, 00:15:00.921 "base_bdevs_list": [ 00:15:00.921 { 00:15:00.921 "name": "spare", 00:15:00.921 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:15:00.921 "is_configured": true, 00:15:00.921 "data_offset": 0, 00:15:00.921 "data_size": 65536 00:15:00.921 }, 00:15:00.921 { 00:15:00.921 "name": "BaseBdev2", 00:15:00.921 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:15:00.921 "is_configured": true, 00:15:00.921 "data_offset": 0, 00:15:00.921 "data_size": 65536 00:15:00.921 }, 00:15:00.921 { 00:15:00.921 "name": "BaseBdev3", 00:15:00.921 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:15:00.921 "is_configured": true, 00:15:00.921 "data_offset": 0, 00:15:00.921 "data_size": 65536 00:15:00.921 }, 00:15:00.921 { 00:15:00.921 "name": "BaseBdev4", 00:15:00.921 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:15:00.921 "is_configured": true, 00:15:00.921 "data_offset": 0, 00:15:00.921 "data_size": 65536 00:15:00.921 } 00:15:00.921 ] 00:15:00.921 }' 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.921 23:32:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.921 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.921 "name": "raid_bdev1", 00:15:00.921 "uuid": "56b6cfc3-761a-45ae-ae98-8d696d5473b6", 00:15:00.921 "strip_size_kb": 64, 00:15:00.922 "state": "online", 00:15:00.922 "raid_level": "raid5f", 00:15:00.922 "superblock": false, 00:15:00.922 "num_base_bdevs": 4, 00:15:00.922 "num_base_bdevs_discovered": 4, 00:15:00.922 "num_base_bdevs_operational": 4, 00:15:00.922 "base_bdevs_list": [ 00:15:00.922 { 00:15:00.922 "name": "spare", 00:15:00.922 "uuid": "c7c443eb-ec11-58e3-8a0f-480351967f8f", 00:15:00.922 "is_configured": 
true, 00:15:00.922 "data_offset": 0, 00:15:00.922 "data_size": 65536 00:15:00.922 }, 00:15:00.922 { 00:15:00.922 "name": "BaseBdev2", 00:15:00.922 "uuid": "3b6d3bf8-0596-5cbe-a8de-03082c73a5a5", 00:15:00.922 "is_configured": true, 00:15:00.922 "data_offset": 0, 00:15:00.922 "data_size": 65536 00:15:00.922 }, 00:15:00.922 { 00:15:00.922 "name": "BaseBdev3", 00:15:00.922 "uuid": "e13b46d8-1354-590c-9b7f-17ea547a0a92", 00:15:00.922 "is_configured": true, 00:15:00.922 "data_offset": 0, 00:15:00.922 "data_size": 65536 00:15:00.922 }, 00:15:00.922 { 00:15:00.922 "name": "BaseBdev4", 00:15:00.922 "uuid": "dc7c442e-b683-5f63-90e7-2127db5a0de2", 00:15:00.922 "is_configured": true, 00:15:00.922 "data_offset": 0, 00:15:00.922 "data_size": 65536 00:15:00.922 } 00:15:00.922 ] 00:15:00.922 }' 00:15:00.922 23:32:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.922 23:32:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.491 [2024-09-30 23:32:41.073852] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:01.491 [2024-09-30 23:32:41.073934] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.491 [2024-09-30 23:32:41.074035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.491 [2024-09-30 23:32:41.074150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.491 [2024-09-30 23:32:41.074218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:01.491 23:32:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.491 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:01.492 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:01.492 /dev/nbd0 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.751 1+0 records in 00:15:01.751 1+0 records out 00:15:01.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513351 s, 8.0 MB/s 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:01.751 /dev/nbd1 00:15:01.751 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.011 1+0 records in 00:15:02.011 1+0 records out 00:15:02.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443347 s, 9.2 MB/s 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.011 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.271 23:32:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:02.271 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:02.271 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:02.271 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:02.271 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.271 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.271 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:02.530 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95059 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95059 ']' 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95059 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95059 00:15:02.531 killing process with pid 95059 00:15:02.531 Received shutdown signal, test time was about 60.000000 seconds 00:15:02.531 00:15:02.531 Latency(us) 00:15:02.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.531 =================================================================================================================== 00:15:02.531 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95059' 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95059 00:15:02.531 [2024-09-30 23:32:42.183854] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.531 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95059 00:15:02.531 [2024-09-30 23:32:42.276698] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.791 23:32:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:02.791 00:15:02.791 real 0m18.674s 00:15:02.791 user 0m22.366s 00:15:02.791 sys 0m2.472s 00:15:02.791 ************************************ 00:15:02.791 END TEST raid5f_rebuild_test 00:15:02.791 ************************************ 00:15:02.791 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.791 23:32:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.051 23:32:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:15:03.051 23:32:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:03.051 23:32:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:03.051 23:32:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.051 ************************************ 00:15:03.051 START TEST raid5f_rebuild_test_sb 00:15:03.051 ************************************ 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:03.051 23:32:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95564 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95564 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95564 ']' 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.051 23:32:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.051 [2024-09-30 23:32:42.815088] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:15:03.051 [2024-09-30 23:32:42.815303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:03.051 Zero copy mechanism will not be used. 
00:15:03.051 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95564 ] 00:15:03.310 [2024-09-30 23:32:42.974206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.311 [2024-09-30 23:32:43.043667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.311 [2024-09-30 23:32:43.119456] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.311 [2024-09-30 23:32:43.119613] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.880 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 BaseBdev1_malloc 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 [2024-09-30 23:32:43.661891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:03.881 [2024-09-30 23:32:43.662024] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:03.881 [2024-09-30 23:32:43.662069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:03.881 [2024-09-30 23:32:43.662106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.881 [2024-09-30 23:32:43.664538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.881 [2024-09-30 23:32:43.664626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:03.881 BaseBdev1 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 BaseBdev2_malloc 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 [2024-09-30 23:32:43.711999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:03.881 [2024-09-30 23:32:43.712198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.881 [2024-09-30 23:32:43.712253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:03.881 
[2024-09-30 23:32:43.712275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.881 [2024-09-30 23:32:43.716354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.881 [2024-09-30 23:32:43.716407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:03.881 BaseBdev2 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.881 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.142 BaseBdev3_malloc 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.142 [2024-09-30 23:32:43.748462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:04.142 [2024-09-30 23:32:43.748513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.142 [2024-09-30 23:32:43.748555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:04.142 [2024-09-30 23:32:43.748564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.142 [2024-09-30 23:32:43.750812] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.142 [2024-09-30 23:32:43.750847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:04.142 BaseBdev3 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.142 BaseBdev4_malloc 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.142 [2024-09-30 23:32:43.782795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:04.142 [2024-09-30 23:32:43.782950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.142 [2024-09-30 23:32:43.782982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:04.142 [2024-09-30 23:32:43.782991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.142 [2024-09-30 23:32:43.785349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.142 [2024-09-30 23:32:43.785385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:15:04.142 BaseBdev4 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.142 spare_malloc 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.142 spare_delay 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.142 [2024-09-30 23:32:43.829078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.142 [2024-09-30 23:32:43.829178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.142 [2024-09-30 23:32:43.829203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:04.142 [2024-09-30 23:32:43.829212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.142 [2024-09-30 23:32:43.831499] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.142 [2024-09-30 23:32:43.831535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.142 spare 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.142 [2024-09-30 23:32:43.841168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.142 [2024-09-30 23:32:43.843147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.142 [2024-09-30 23:32:43.843210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.142 [2024-09-30 23:32:43.843246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:04.142 [2024-09-30 23:32:43.843432] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:04.142 [2024-09-30 23:32:43.843444] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:04.142 [2024-09-30 23:32:43.843683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:04.142 [2024-09-30 23:32:43.844155] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:04.142 [2024-09-30 23:32:43.844174] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:04.142 [2024-09-30 23:32:43.844287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.142 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.143 "name": "raid_bdev1", 00:15:04.143 "uuid": 
"4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:04.143 "strip_size_kb": 64, 00:15:04.143 "state": "online", 00:15:04.143 "raid_level": "raid5f", 00:15:04.143 "superblock": true, 00:15:04.143 "num_base_bdevs": 4, 00:15:04.143 "num_base_bdevs_discovered": 4, 00:15:04.143 "num_base_bdevs_operational": 4, 00:15:04.143 "base_bdevs_list": [ 00:15:04.143 { 00:15:04.143 "name": "BaseBdev1", 00:15:04.143 "uuid": "1e5aa47a-6764-5bfb-b9df-1ffa54fa5833", 00:15:04.143 "is_configured": true, 00:15:04.143 "data_offset": 2048, 00:15:04.143 "data_size": 63488 00:15:04.143 }, 00:15:04.143 { 00:15:04.143 "name": "BaseBdev2", 00:15:04.143 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:04.143 "is_configured": true, 00:15:04.143 "data_offset": 2048, 00:15:04.143 "data_size": 63488 00:15:04.143 }, 00:15:04.143 { 00:15:04.143 "name": "BaseBdev3", 00:15:04.143 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:04.143 "is_configured": true, 00:15:04.143 "data_offset": 2048, 00:15:04.143 "data_size": 63488 00:15:04.143 }, 00:15:04.143 { 00:15:04.143 "name": "BaseBdev4", 00:15:04.143 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:04.143 "is_configured": true, 00:15:04.143 "data_offset": 2048, 00:15:04.143 "data_size": 63488 00:15:04.143 } 00:15:04.143 ] 00:15:04.143 }' 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.143 23:32:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.712 [2024-09-30 23:32:44.282631] bdev_raid.c:1129:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.712 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:04.712 [2024-09-30 23:32:44.530098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:04.712 /dev/nbd0 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.972 1+0 records in 00:15:04.972 1+0 records out 00:15:04.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469215 s, 8.7 MB/s 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:04.972 23:32:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:05.540 496+0 records in 00:15:05.540 496+0 records out 00:15:05.540 97517568 bytes (98 MB, 93 MiB) copied, 0.69958 s, 139 MB/s 00:15:05.540 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:05.540 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.540 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:05.540 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.540 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:05.540 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:05.540 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.799 [2024-09-30 23:32:45.524913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.799 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.800 [2024-09-30 23:32:45.552972] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.800 "name": "raid_bdev1", 00:15:05.800 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:05.800 "strip_size_kb": 64, 00:15:05.800 "state": "online", 00:15:05.800 "raid_level": "raid5f", 00:15:05.800 "superblock": true, 00:15:05.800 "num_base_bdevs": 4, 00:15:05.800 "num_base_bdevs_discovered": 3, 00:15:05.800 "num_base_bdevs_operational": 3, 00:15:05.800 "base_bdevs_list": [ 00:15:05.800 { 00:15:05.800 "name": null, 00:15:05.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.800 "is_configured": 
false, 00:15:05.800 "data_offset": 0, 00:15:05.800 "data_size": 63488 00:15:05.800 }, 00:15:05.800 { 00:15:05.800 "name": "BaseBdev2", 00:15:05.800 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:05.800 "is_configured": true, 00:15:05.800 "data_offset": 2048, 00:15:05.800 "data_size": 63488 00:15:05.800 }, 00:15:05.800 { 00:15:05.800 "name": "BaseBdev3", 00:15:05.800 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:05.800 "is_configured": true, 00:15:05.800 "data_offset": 2048, 00:15:05.800 "data_size": 63488 00:15:05.800 }, 00:15:05.800 { 00:15:05.800 "name": "BaseBdev4", 00:15:05.800 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:05.800 "is_configured": true, 00:15:05.800 "data_offset": 2048, 00:15:05.800 "data_size": 63488 00:15:05.800 } 00:15:05.800 ] 00:15:05.800 }' 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.800 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.368 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.368 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.368 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.368 [2024-09-30 23:32:45.992220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.368 [2024-09-30 23:32:45.997939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:15:06.368 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.368 23:32:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:06.368 [2024-09-30 23:32:46.000413] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.306 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.306 "name": "raid_bdev1", 00:15:07.306 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:07.306 "strip_size_kb": 64, 00:15:07.306 "state": "online", 00:15:07.306 "raid_level": "raid5f", 00:15:07.306 "superblock": true, 00:15:07.306 "num_base_bdevs": 4, 00:15:07.306 "num_base_bdevs_discovered": 4, 00:15:07.306 "num_base_bdevs_operational": 4, 00:15:07.306 "process": { 00:15:07.306 "type": "rebuild", 00:15:07.306 "target": "spare", 00:15:07.306 "progress": { 00:15:07.306 "blocks": 19200, 00:15:07.306 "percent": 10 00:15:07.306 } 00:15:07.306 }, 00:15:07.306 "base_bdevs_list": [ 00:15:07.306 { 00:15:07.306 "name": "spare", 00:15:07.306 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:07.306 "is_configured": true, 00:15:07.306 "data_offset": 2048, 00:15:07.306 "data_size": 63488 00:15:07.306 }, 
00:15:07.306 { 00:15:07.307 "name": "BaseBdev2", 00:15:07.307 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:07.307 "is_configured": true, 00:15:07.307 "data_offset": 2048, 00:15:07.307 "data_size": 63488 00:15:07.307 }, 00:15:07.307 { 00:15:07.307 "name": "BaseBdev3", 00:15:07.307 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:07.307 "is_configured": true, 00:15:07.307 "data_offset": 2048, 00:15:07.307 "data_size": 63488 00:15:07.307 }, 00:15:07.307 { 00:15:07.307 "name": "BaseBdev4", 00:15:07.307 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:07.307 "is_configured": true, 00:15:07.307 "data_offset": 2048, 00:15:07.307 "data_size": 63488 00:15:07.307 } 00:15:07.307 ] 00:15:07.307 }' 00:15:07.307 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.307 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.307 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.307 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.307 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:07.307 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.307 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.307 [2024-09-30 23:32:47.155678] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.566 [2024-09-30 23:32:47.206891] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:07.566 [2024-09-30 23:32:47.206951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.566 [2024-09-30 23:32:47.206970] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:07.566 
[2024-09-30 23:32:47.206981] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.566 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.567 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.567 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.567 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.567 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.567 "name": "raid_bdev1", 00:15:07.567 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:07.567 "strip_size_kb": 64, 00:15:07.567 "state": "online", 00:15:07.567 "raid_level": "raid5f", 00:15:07.567 "superblock": true, 00:15:07.567 "num_base_bdevs": 4, 00:15:07.567 "num_base_bdevs_discovered": 3, 00:15:07.567 "num_base_bdevs_operational": 3, 00:15:07.567 "base_bdevs_list": [ 00:15:07.567 { 00:15:07.567 "name": null, 00:15:07.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.567 "is_configured": false, 00:15:07.567 "data_offset": 0, 00:15:07.567 "data_size": 63488 00:15:07.567 }, 00:15:07.567 { 00:15:07.567 "name": "BaseBdev2", 00:15:07.567 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:07.567 "is_configured": true, 00:15:07.567 "data_offset": 2048, 00:15:07.567 "data_size": 63488 00:15:07.567 }, 00:15:07.567 { 00:15:07.567 "name": "BaseBdev3", 00:15:07.567 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:07.567 "is_configured": true, 00:15:07.567 "data_offset": 2048, 00:15:07.567 "data_size": 63488 00:15:07.567 }, 00:15:07.567 { 00:15:07.567 "name": "BaseBdev4", 00:15:07.567 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:07.567 "is_configured": true, 00:15:07.567 "data_offset": 2048, 00:15:07.567 "data_size": 63488 00:15:07.567 } 00:15:07.567 ] 00:15:07.567 }' 00:15:07.567 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.567 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.826 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.087 "name": "raid_bdev1", 00:15:08.087 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:08.087 "strip_size_kb": 64, 00:15:08.087 "state": "online", 00:15:08.087 "raid_level": "raid5f", 00:15:08.087 "superblock": true, 00:15:08.087 "num_base_bdevs": 4, 00:15:08.087 "num_base_bdevs_discovered": 3, 00:15:08.087 "num_base_bdevs_operational": 3, 00:15:08.087 "base_bdevs_list": [ 00:15:08.087 { 00:15:08.087 "name": null, 00:15:08.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.087 "is_configured": false, 00:15:08.087 "data_offset": 0, 00:15:08.087 "data_size": 63488 00:15:08.087 }, 00:15:08.087 { 00:15:08.087 "name": "BaseBdev2", 00:15:08.087 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:08.087 "is_configured": true, 00:15:08.087 "data_offset": 2048, 00:15:08.087 "data_size": 63488 00:15:08.087 }, 00:15:08.087 { 00:15:08.087 "name": "BaseBdev3", 00:15:08.087 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:08.087 "is_configured": true, 00:15:08.087 "data_offset": 2048, 00:15:08.087 "data_size": 63488 00:15:08.087 }, 00:15:08.087 { 00:15:08.087 "name": "BaseBdev4", 00:15:08.087 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 
00:15:08.087 "is_configured": true, 00:15:08.087 "data_offset": 2048, 00:15:08.087 "data_size": 63488 00:15:08.087 } 00:15:08.087 ] 00:15:08.087 }' 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.087 [2024-09-30 23:32:47.807035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.087 [2024-09-30 23:32:47.812063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.087 23:32:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:08.087 [2024-09-30 23:32:47.814526] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.025 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.026 "name": "raid_bdev1", 00:15:09.026 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:09.026 "strip_size_kb": 64, 00:15:09.026 "state": "online", 00:15:09.026 "raid_level": "raid5f", 00:15:09.026 "superblock": true, 00:15:09.026 "num_base_bdevs": 4, 00:15:09.026 "num_base_bdevs_discovered": 4, 00:15:09.026 "num_base_bdevs_operational": 4, 00:15:09.026 "process": { 00:15:09.026 "type": "rebuild", 00:15:09.026 "target": "spare", 00:15:09.026 "progress": { 00:15:09.026 "blocks": 19200, 00:15:09.026 "percent": 10 00:15:09.026 } 00:15:09.026 }, 00:15:09.026 "base_bdevs_list": [ 00:15:09.026 { 00:15:09.026 "name": "spare", 00:15:09.026 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:09.026 "is_configured": true, 00:15:09.026 "data_offset": 2048, 00:15:09.026 "data_size": 63488 00:15:09.026 }, 00:15:09.026 { 00:15:09.026 "name": "BaseBdev2", 00:15:09.026 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:09.026 "is_configured": true, 00:15:09.026 "data_offset": 2048, 00:15:09.026 "data_size": 63488 00:15:09.026 }, 00:15:09.026 { 00:15:09.026 "name": "BaseBdev3", 00:15:09.026 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:09.026 "is_configured": true, 00:15:09.026 "data_offset": 2048, 
00:15:09.026 "data_size": 63488 00:15:09.026 }, 00:15:09.026 { 00:15:09.026 "name": "BaseBdev4", 00:15:09.026 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:09.026 "is_configured": true, 00:15:09.026 "data_offset": 2048, 00:15:09.026 "data_size": 63488 00:15:09.026 } 00:15:09.026 ] 00:15:09.026 }' 00:15:09.026 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:09.286 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=529 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.286 23:32:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.286 23:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.286 "name": "raid_bdev1", 00:15:09.286 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:09.286 "strip_size_kb": 64, 00:15:09.286 "state": "online", 00:15:09.286 "raid_level": "raid5f", 00:15:09.286 "superblock": true, 00:15:09.286 "num_base_bdevs": 4, 00:15:09.286 "num_base_bdevs_discovered": 4, 00:15:09.286 "num_base_bdevs_operational": 4, 00:15:09.286 "process": { 00:15:09.286 "type": "rebuild", 00:15:09.286 "target": "spare", 00:15:09.286 "progress": { 00:15:09.286 "blocks": 21120, 00:15:09.286 "percent": 11 00:15:09.286 } 00:15:09.286 }, 00:15:09.286 "base_bdevs_list": [ 00:15:09.286 { 00:15:09.286 "name": "spare", 00:15:09.286 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:09.286 "is_configured": true, 00:15:09.286 "data_offset": 2048, 00:15:09.286 "data_size": 63488 00:15:09.286 }, 00:15:09.286 { 00:15:09.286 "name": "BaseBdev2", 00:15:09.286 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:09.286 "is_configured": true, 00:15:09.286 "data_offset": 2048, 00:15:09.286 "data_size": 63488 00:15:09.286 }, 00:15:09.286 { 00:15:09.286 "name": "BaseBdev3", 00:15:09.286 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:09.286 "is_configured": true, 00:15:09.286 "data_offset": 2048, 
00:15:09.286 "data_size": 63488 00:15:09.286 }, 00:15:09.286 { 00:15:09.286 "name": "BaseBdev4", 00:15:09.286 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:09.286 "is_configured": true, 00:15:09.286 "data_offset": 2048, 00:15:09.286 "data_size": 63488 00:15:09.286 } 00:15:09.286 ] 00:15:09.286 }' 00:15:09.286 23:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.286 23:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.286 23:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.286 23:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.286 23:32:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.666 "name": "raid_bdev1", 00:15:10.666 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:10.666 "strip_size_kb": 64, 00:15:10.666 "state": "online", 00:15:10.666 "raid_level": "raid5f", 00:15:10.666 "superblock": true, 00:15:10.666 "num_base_bdevs": 4, 00:15:10.666 "num_base_bdevs_discovered": 4, 00:15:10.666 "num_base_bdevs_operational": 4, 00:15:10.666 "process": { 00:15:10.666 "type": "rebuild", 00:15:10.666 "target": "spare", 00:15:10.666 "progress": { 00:15:10.666 "blocks": 42240, 00:15:10.666 "percent": 22 00:15:10.666 } 00:15:10.666 }, 00:15:10.666 "base_bdevs_list": [ 00:15:10.666 { 00:15:10.666 "name": "spare", 00:15:10.666 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:10.666 "is_configured": true, 00:15:10.666 "data_offset": 2048, 00:15:10.666 "data_size": 63488 00:15:10.666 }, 00:15:10.666 { 00:15:10.666 "name": "BaseBdev2", 00:15:10.666 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:10.666 "is_configured": true, 00:15:10.666 "data_offset": 2048, 00:15:10.666 "data_size": 63488 00:15:10.666 }, 00:15:10.666 { 00:15:10.666 "name": "BaseBdev3", 00:15:10.666 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:10.666 "is_configured": true, 00:15:10.666 "data_offset": 2048, 00:15:10.666 "data_size": 63488 00:15:10.666 }, 00:15:10.666 { 00:15:10.666 "name": "BaseBdev4", 00:15:10.666 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:10.666 "is_configured": true, 00:15:10.666 "data_offset": 2048, 00:15:10.666 "data_size": 63488 00:15:10.666 } 00:15:10.666 ] 00:15:10.666 }' 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.666 23:32:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.666 23:32:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.605 "name": "raid_bdev1", 00:15:11.605 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:11.605 "strip_size_kb": 64, 00:15:11.605 "state": "online", 00:15:11.605 "raid_level": "raid5f", 00:15:11.605 "superblock": true, 00:15:11.605 "num_base_bdevs": 4, 00:15:11.605 "num_base_bdevs_discovered": 4, 00:15:11.605 "num_base_bdevs_operational": 
4, 00:15:11.605 "process": { 00:15:11.605 "type": "rebuild", 00:15:11.605 "target": "spare", 00:15:11.605 "progress": { 00:15:11.605 "blocks": 65280, 00:15:11.605 "percent": 34 00:15:11.605 } 00:15:11.605 }, 00:15:11.605 "base_bdevs_list": [ 00:15:11.605 { 00:15:11.605 "name": "spare", 00:15:11.605 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:11.605 "is_configured": true, 00:15:11.605 "data_offset": 2048, 00:15:11.605 "data_size": 63488 00:15:11.605 }, 00:15:11.605 { 00:15:11.605 "name": "BaseBdev2", 00:15:11.605 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:11.605 "is_configured": true, 00:15:11.605 "data_offset": 2048, 00:15:11.605 "data_size": 63488 00:15:11.605 }, 00:15:11.605 { 00:15:11.605 "name": "BaseBdev3", 00:15:11.605 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:11.605 "is_configured": true, 00:15:11.605 "data_offset": 2048, 00:15:11.605 "data_size": 63488 00:15:11.605 }, 00:15:11.605 { 00:15:11.605 "name": "BaseBdev4", 00:15:11.605 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:11.605 "is_configured": true, 00:15:11.605 "data_offset": 2048, 00:15:11.605 "data_size": 63488 00:15:11.605 } 00:15:11.605 ] 00:15:11.605 }' 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.605 23:32:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.983 
23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.983 "name": "raid_bdev1", 00:15:12.983 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:12.983 "strip_size_kb": 64, 00:15:12.983 "state": "online", 00:15:12.983 "raid_level": "raid5f", 00:15:12.983 "superblock": true, 00:15:12.983 "num_base_bdevs": 4, 00:15:12.983 "num_base_bdevs_discovered": 4, 00:15:12.983 "num_base_bdevs_operational": 4, 00:15:12.983 "process": { 00:15:12.983 "type": "rebuild", 00:15:12.983 "target": "spare", 00:15:12.983 "progress": { 00:15:12.983 "blocks": 86400, 00:15:12.983 "percent": 45 00:15:12.983 } 00:15:12.983 }, 00:15:12.983 "base_bdevs_list": [ 00:15:12.983 { 00:15:12.983 "name": "spare", 00:15:12.983 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:12.983 "is_configured": true, 00:15:12.983 "data_offset": 2048, 00:15:12.983 "data_size": 63488 00:15:12.983 }, 00:15:12.983 { 00:15:12.983 "name": "BaseBdev2", 00:15:12.983 "uuid": 
"193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:12.983 "is_configured": true, 00:15:12.983 "data_offset": 2048, 00:15:12.983 "data_size": 63488 00:15:12.983 }, 00:15:12.983 { 00:15:12.983 "name": "BaseBdev3", 00:15:12.983 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:12.983 "is_configured": true, 00:15:12.983 "data_offset": 2048, 00:15:12.983 "data_size": 63488 00:15:12.983 }, 00:15:12.983 { 00:15:12.983 "name": "BaseBdev4", 00:15:12.983 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:12.983 "is_configured": true, 00:15:12.983 "data_offset": 2048, 00:15:12.983 "data_size": 63488 00:15:12.983 } 00:15:12.983 ] 00:15:12.983 }' 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:12.983 23:32:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.922 "name": "raid_bdev1", 00:15:13.922 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:13.922 "strip_size_kb": 64, 00:15:13.922 "state": "online", 00:15:13.922 "raid_level": "raid5f", 00:15:13.922 "superblock": true, 00:15:13.922 "num_base_bdevs": 4, 00:15:13.922 "num_base_bdevs_discovered": 4, 00:15:13.922 "num_base_bdevs_operational": 4, 00:15:13.922 "process": { 00:15:13.922 "type": "rebuild", 00:15:13.922 "target": "spare", 00:15:13.922 "progress": { 00:15:13.922 "blocks": 109440, 00:15:13.922 "percent": 57 00:15:13.922 } 00:15:13.922 }, 00:15:13.922 "base_bdevs_list": [ 00:15:13.922 { 00:15:13.922 "name": "spare", 00:15:13.922 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:13.922 "is_configured": true, 00:15:13.922 "data_offset": 2048, 00:15:13.922 "data_size": 63488 00:15:13.922 }, 00:15:13.922 { 00:15:13.922 "name": "BaseBdev2", 00:15:13.922 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:13.922 "is_configured": true, 00:15:13.922 "data_offset": 2048, 00:15:13.922 "data_size": 63488 00:15:13.922 }, 00:15:13.922 { 00:15:13.922 "name": "BaseBdev3", 00:15:13.922 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:13.922 "is_configured": true, 00:15:13.922 "data_offset": 2048, 00:15:13.922 "data_size": 63488 00:15:13.922 }, 00:15:13.922 { 00:15:13.922 "name": "BaseBdev4", 00:15:13.922 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:13.922 "is_configured": true, 00:15:13.922 "data_offset": 
2048, 00:15:13.922 "data_size": 63488 00:15:13.922 } 00:15:13.922 ] 00:15:13.922 }' 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.922 23:32:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.304 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.304 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.304 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.305 
"name": "raid_bdev1", 00:15:15.305 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:15.305 "strip_size_kb": 64, 00:15:15.305 "state": "online", 00:15:15.305 "raid_level": "raid5f", 00:15:15.305 "superblock": true, 00:15:15.305 "num_base_bdevs": 4, 00:15:15.305 "num_base_bdevs_discovered": 4, 00:15:15.305 "num_base_bdevs_operational": 4, 00:15:15.305 "process": { 00:15:15.305 "type": "rebuild", 00:15:15.305 "target": "spare", 00:15:15.305 "progress": { 00:15:15.305 "blocks": 130560, 00:15:15.305 "percent": 68 00:15:15.305 } 00:15:15.305 }, 00:15:15.305 "base_bdevs_list": [ 00:15:15.305 { 00:15:15.305 "name": "spare", 00:15:15.305 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:15.305 "is_configured": true, 00:15:15.305 "data_offset": 2048, 00:15:15.305 "data_size": 63488 00:15:15.305 }, 00:15:15.305 { 00:15:15.305 "name": "BaseBdev2", 00:15:15.305 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:15.305 "is_configured": true, 00:15:15.305 "data_offset": 2048, 00:15:15.305 "data_size": 63488 00:15:15.305 }, 00:15:15.305 { 00:15:15.305 "name": "BaseBdev3", 00:15:15.305 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:15.305 "is_configured": true, 00:15:15.305 "data_offset": 2048, 00:15:15.305 "data_size": 63488 00:15:15.305 }, 00:15:15.305 { 00:15:15.305 "name": "BaseBdev4", 00:15:15.305 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:15.305 "is_configured": true, 00:15:15.305 "data_offset": 2048, 00:15:15.305 "data_size": 63488 00:15:15.305 } 00:15:15.305 ] 00:15:15.305 }' 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.305 23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.305 
23:32:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.245 "name": "raid_bdev1", 00:15:16.245 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:16.245 "strip_size_kb": 64, 00:15:16.245 "state": "online", 00:15:16.245 "raid_level": "raid5f", 00:15:16.245 "superblock": true, 00:15:16.245 "num_base_bdevs": 4, 00:15:16.245 "num_base_bdevs_discovered": 4, 00:15:16.245 "num_base_bdevs_operational": 4, 00:15:16.245 "process": { 00:15:16.245 "type": "rebuild", 00:15:16.245 "target": "spare", 00:15:16.245 "progress": { 00:15:16.245 "blocks": 153600, 00:15:16.245 "percent": 80 00:15:16.245 } 00:15:16.245 }, 
00:15:16.245 "base_bdevs_list": [ 00:15:16.245 { 00:15:16.245 "name": "spare", 00:15:16.245 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:16.245 "is_configured": true, 00:15:16.245 "data_offset": 2048, 00:15:16.245 "data_size": 63488 00:15:16.245 }, 00:15:16.245 { 00:15:16.245 "name": "BaseBdev2", 00:15:16.245 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:16.245 "is_configured": true, 00:15:16.245 "data_offset": 2048, 00:15:16.245 "data_size": 63488 00:15:16.245 }, 00:15:16.245 { 00:15:16.245 "name": "BaseBdev3", 00:15:16.245 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:16.245 "is_configured": true, 00:15:16.245 "data_offset": 2048, 00:15:16.245 "data_size": 63488 00:15:16.245 }, 00:15:16.245 { 00:15:16.245 "name": "BaseBdev4", 00:15:16.245 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:16.245 "is_configured": true, 00:15:16.245 "data_offset": 2048, 00:15:16.245 "data_size": 63488 00:15:16.245 } 00:15:16.245 ] 00:15:16.245 }' 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.245 23:32:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.245 23:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.245 23:32:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.185 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.185 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.185 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.185 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:17.185 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.185 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.445 "name": "raid_bdev1", 00:15:17.445 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:17.445 "strip_size_kb": 64, 00:15:17.445 "state": "online", 00:15:17.445 "raid_level": "raid5f", 00:15:17.445 "superblock": true, 00:15:17.445 "num_base_bdevs": 4, 00:15:17.445 "num_base_bdevs_discovered": 4, 00:15:17.445 "num_base_bdevs_operational": 4, 00:15:17.445 "process": { 00:15:17.445 "type": "rebuild", 00:15:17.445 "target": "spare", 00:15:17.445 "progress": { 00:15:17.445 "blocks": 174720, 00:15:17.445 "percent": 91 00:15:17.445 } 00:15:17.445 }, 00:15:17.445 "base_bdevs_list": [ 00:15:17.445 { 00:15:17.445 "name": "spare", 00:15:17.445 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:17.445 "is_configured": true, 00:15:17.445 "data_offset": 2048, 00:15:17.445 "data_size": 63488 00:15:17.445 }, 00:15:17.445 { 00:15:17.445 "name": "BaseBdev2", 00:15:17.445 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:17.445 "is_configured": true, 00:15:17.445 "data_offset": 2048, 00:15:17.445 "data_size": 63488 00:15:17.445 }, 00:15:17.445 { 00:15:17.445 "name": "BaseBdev3", 
00:15:17.445 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:17.445 "is_configured": true, 00:15:17.445 "data_offset": 2048, 00:15:17.445 "data_size": 63488 00:15:17.445 }, 00:15:17.445 { 00:15:17.445 "name": "BaseBdev4", 00:15:17.445 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:17.445 "is_configured": true, 00:15:17.445 "data_offset": 2048, 00:15:17.445 "data_size": 63488 00:15:17.445 } 00:15:17.445 ] 00:15:17.445 }' 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.445 23:32:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.014 [2024-09-30 23:32:57.862407] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:18.014 [2024-09-30 23:32:57.862495] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:18.014 [2024-09-30 23:32:57.862631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.585 23:32:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.585 "name": "raid_bdev1", 00:15:18.585 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:18.585 "strip_size_kb": 64, 00:15:18.585 "state": "online", 00:15:18.585 "raid_level": "raid5f", 00:15:18.585 "superblock": true, 00:15:18.585 "num_base_bdevs": 4, 00:15:18.585 "num_base_bdevs_discovered": 4, 00:15:18.585 "num_base_bdevs_operational": 4, 00:15:18.585 "base_bdevs_list": [ 00:15:18.585 { 00:15:18.585 "name": "spare", 00:15:18.585 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:18.585 "is_configured": true, 00:15:18.585 "data_offset": 2048, 00:15:18.585 "data_size": 63488 00:15:18.585 }, 00:15:18.585 { 00:15:18.585 "name": "BaseBdev2", 00:15:18.585 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:18.585 "is_configured": true, 00:15:18.585 "data_offset": 2048, 00:15:18.585 "data_size": 63488 00:15:18.585 }, 00:15:18.585 { 00:15:18.585 "name": "BaseBdev3", 00:15:18.585 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:18.585 "is_configured": true, 00:15:18.585 "data_offset": 2048, 00:15:18.585 "data_size": 63488 00:15:18.585 }, 00:15:18.585 { 00:15:18.585 "name": "BaseBdev4", 00:15:18.585 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:18.585 "is_configured": true, 00:15:18.585 "data_offset": 2048, 
00:15:18.585 "data_size": 63488 00:15:18.585 } 00:15:18.585 ] 00:15:18.585 }' 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.585 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.585 "name": "raid_bdev1", 00:15:18.585 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:18.585 "strip_size_kb": 64, 00:15:18.585 
"state": "online", 00:15:18.585 "raid_level": "raid5f", 00:15:18.585 "superblock": true, 00:15:18.585 "num_base_bdevs": 4, 00:15:18.585 "num_base_bdevs_discovered": 4, 00:15:18.585 "num_base_bdevs_operational": 4, 00:15:18.585 "base_bdevs_list": [ 00:15:18.585 { 00:15:18.585 "name": "spare", 00:15:18.585 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:18.585 "is_configured": true, 00:15:18.585 "data_offset": 2048, 00:15:18.585 "data_size": 63488 00:15:18.585 }, 00:15:18.585 { 00:15:18.586 "name": "BaseBdev2", 00:15:18.586 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:18.586 "is_configured": true, 00:15:18.586 "data_offset": 2048, 00:15:18.586 "data_size": 63488 00:15:18.586 }, 00:15:18.586 { 00:15:18.586 "name": "BaseBdev3", 00:15:18.586 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:18.586 "is_configured": true, 00:15:18.586 "data_offset": 2048, 00:15:18.586 "data_size": 63488 00:15:18.586 }, 00:15:18.586 { 00:15:18.586 "name": "BaseBdev4", 00:15:18.586 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:18.586 "is_configured": true, 00:15:18.586 "data_offset": 2048, 00:15:18.586 "data_size": 63488 00:15:18.586 } 00:15:18.586 ] 00:15:18.586 }' 00:15:18.586 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.846 "name": "raid_bdev1", 00:15:18.846 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:18.846 "strip_size_kb": 64, 00:15:18.846 "state": "online", 00:15:18.846 "raid_level": "raid5f", 00:15:18.846 "superblock": true, 00:15:18.846 "num_base_bdevs": 4, 00:15:18.846 "num_base_bdevs_discovered": 4, 00:15:18.846 "num_base_bdevs_operational": 4, 00:15:18.846 "base_bdevs_list": [ 00:15:18.846 { 00:15:18.846 "name": "spare", 00:15:18.846 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:18.846 "is_configured": true, 00:15:18.846 
"data_offset": 2048, 00:15:18.846 "data_size": 63488 00:15:18.846 }, 00:15:18.846 { 00:15:18.846 "name": "BaseBdev2", 00:15:18.846 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:18.846 "is_configured": true, 00:15:18.846 "data_offset": 2048, 00:15:18.846 "data_size": 63488 00:15:18.846 }, 00:15:18.846 { 00:15:18.846 "name": "BaseBdev3", 00:15:18.846 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:18.846 "is_configured": true, 00:15:18.846 "data_offset": 2048, 00:15:18.846 "data_size": 63488 00:15:18.846 }, 00:15:18.846 { 00:15:18.846 "name": "BaseBdev4", 00:15:18.846 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:18.846 "is_configured": true, 00:15:18.846 "data_offset": 2048, 00:15:18.846 "data_size": 63488 00:15:18.846 } 00:15:18.846 ] 00:15:18.846 }' 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.846 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.416 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.416 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.416 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.416 [2024-09-30 23:32:58.970049] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.416 [2024-09-30 23:32:58.970082] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.416 [2024-09-30 23:32:58.970164] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.416 [2024-09-30 23:32:58.970261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.416 [2024-09-30 23:32:58.970282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:19.416 
23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.416 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.416 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:19.416 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.416 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.416 23:32:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:19.416 /dev/nbd0 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.416 1+0 records in 00:15:19.416 1+0 records out 00:15:19.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384236 s, 10.7 MB/s 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:19.416 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:19.675 /dev/nbd1 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.675 1+0 records in 00:15:19.675 1+0 records out 00:15:19.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307251 s, 13.3 MB/s 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:19.675 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:19.934 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:19.934 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.934 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:19.934 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.934 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:19.934 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.934 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.193 23:32:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.193 
23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.193 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.453 [2024-09-30 23:33:00.047313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.453 [2024-09-30 23:33:00.047734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.453 [2024-09-30 23:33:00.047776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:20.453 [2024-09-30 23:33:00.047790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.453 [2024-09-30 23:33:00.050177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.453 [2024-09-30 23:33:00.050218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.453 [2024-09-30 23:33:00.050317] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:20.453 [2024-09-30 23:33:00.050369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.453 [2024-09-30 23:33:00.050506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.453 [2024-09-30 23:33:00.050613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.453 [2024-09-30 23:33:00.050689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:20.453 spare 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.453 [2024-09-30 23:33:00.150606] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:20.453 [2024-09-30 23:33:00.150631] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:20.453 [2024-09-30 23:33:00.150916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:20.453 [2024-09-30 23:33:00.151372] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:20.453 [2024-09-30 23:33:00.151393] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:20.453 [2024-09-30 23:33:00.151546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.453 "name": "raid_bdev1", 00:15:20.453 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:20.453 "strip_size_kb": 64, 00:15:20.453 "state": "online", 00:15:20.453 "raid_level": "raid5f", 00:15:20.453 "superblock": true, 00:15:20.453 "num_base_bdevs": 4, 00:15:20.453 "num_base_bdevs_discovered": 4, 00:15:20.453 "num_base_bdevs_operational": 4, 00:15:20.453 "base_bdevs_list": [ 00:15:20.453 { 00:15:20.453 "name": "spare", 00:15:20.453 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:20.453 "is_configured": true, 00:15:20.453 "data_offset": 2048, 00:15:20.453 "data_size": 63488 00:15:20.453 }, 00:15:20.453 { 00:15:20.453 "name": "BaseBdev2", 00:15:20.453 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:20.453 "is_configured": true, 00:15:20.453 "data_offset": 2048, 00:15:20.453 "data_size": 63488 00:15:20.453 }, 00:15:20.453 { 00:15:20.453 "name": "BaseBdev3", 00:15:20.453 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:20.453 
"is_configured": true, 00:15:20.453 "data_offset": 2048, 00:15:20.453 "data_size": 63488 00:15:20.453 }, 00:15:20.453 { 00:15:20.453 "name": "BaseBdev4", 00:15:20.453 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:20.453 "is_configured": true, 00:15:20.453 "data_offset": 2048, 00:15:20.453 "data_size": 63488 00:15:20.453 } 00:15:20.453 ] 00:15:20.453 }' 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.453 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.042 "name": "raid_bdev1", 00:15:21.042 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:21.042 "strip_size_kb": 64, 00:15:21.042 "state": "online", 00:15:21.042 "raid_level": "raid5f", 
00:15:21.042 "superblock": true, 00:15:21.042 "num_base_bdevs": 4, 00:15:21.042 "num_base_bdevs_discovered": 4, 00:15:21.042 "num_base_bdevs_operational": 4, 00:15:21.042 "base_bdevs_list": [ 00:15:21.042 { 00:15:21.042 "name": "spare", 00:15:21.042 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:21.042 "is_configured": true, 00:15:21.042 "data_offset": 2048, 00:15:21.042 "data_size": 63488 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "name": "BaseBdev2", 00:15:21.042 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:21.042 "is_configured": true, 00:15:21.042 "data_offset": 2048, 00:15:21.042 "data_size": 63488 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "name": "BaseBdev3", 00:15:21.042 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:21.042 "is_configured": true, 00:15:21.042 "data_offset": 2048, 00:15:21.042 "data_size": 63488 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "name": "BaseBdev4", 00:15:21.042 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:21.042 "is_configured": true, 00:15:21.042 "data_offset": 2048, 00:15:21.042 "data_size": 63488 00:15:21.042 } 00:15:21.042 ] 00:15:21.042 }' 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.042 [2024-09-30 23:33:00.842344] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.042 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.315 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.315 "name": "raid_bdev1", 00:15:21.315 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:21.315 "strip_size_kb": 64, 00:15:21.315 "state": "online", 00:15:21.315 "raid_level": "raid5f", 00:15:21.315 "superblock": true, 00:15:21.315 "num_base_bdevs": 4, 00:15:21.315 "num_base_bdevs_discovered": 3, 00:15:21.315 "num_base_bdevs_operational": 3, 00:15:21.315 "base_bdevs_list": [ 00:15:21.315 { 00:15:21.315 "name": null, 00:15:21.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.315 "is_configured": false, 00:15:21.315 "data_offset": 0, 00:15:21.315 "data_size": 63488 00:15:21.315 }, 00:15:21.315 { 00:15:21.315 "name": "BaseBdev2", 00:15:21.315 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:21.315 "is_configured": true, 00:15:21.315 "data_offset": 2048, 00:15:21.315 "data_size": 63488 00:15:21.315 }, 00:15:21.315 { 00:15:21.315 "name": "BaseBdev3", 00:15:21.315 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:21.315 "is_configured": true, 00:15:21.315 "data_offset": 2048, 00:15:21.315 "data_size": 63488 00:15:21.315 }, 00:15:21.315 { 00:15:21.315 "name": "BaseBdev4", 00:15:21.315 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:21.315 "is_configured": true, 00:15:21.315 "data_offset": 2048, 00:15:21.315 "data_size": 63488 00:15:21.315 } 00:15:21.315 ] 00:15:21.315 }' 00:15:21.315 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.315 23:33:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.591 23:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.591 23:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.591 23:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.591 [2024-09-30 23:33:01.293666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.591 [2024-09-30 23:33:01.293850] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:21.591 [2024-09-30 23:33:01.293881] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:21.591 [2024-09-30 23:33:01.293922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.591 [2024-09-30 23:33:01.299483] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:21.591 23:33:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.591 23:33:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:21.591 [2024-09-30 23:33:01.301830] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.530 23:33:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.530 "name": "raid_bdev1", 00:15:22.530 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:22.530 "strip_size_kb": 64, 00:15:22.530 "state": "online", 00:15:22.530 "raid_level": "raid5f", 00:15:22.530 "superblock": true, 00:15:22.530 "num_base_bdevs": 4, 00:15:22.530 "num_base_bdevs_discovered": 4, 00:15:22.530 "num_base_bdevs_operational": 4, 00:15:22.530 "process": { 00:15:22.530 "type": "rebuild", 00:15:22.530 "target": "spare", 00:15:22.530 "progress": { 00:15:22.530 "blocks": 19200, 00:15:22.530 "percent": 10 00:15:22.530 } 00:15:22.530 }, 00:15:22.530 "base_bdevs_list": [ 00:15:22.530 { 00:15:22.530 "name": "spare", 00:15:22.530 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:22.530 "is_configured": true, 00:15:22.530 "data_offset": 2048, 00:15:22.530 "data_size": 63488 00:15:22.530 }, 00:15:22.530 { 00:15:22.530 "name": "BaseBdev2", 00:15:22.530 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:22.530 "is_configured": true, 00:15:22.530 "data_offset": 2048, 00:15:22.530 "data_size": 63488 00:15:22.530 }, 00:15:22.530 { 00:15:22.530 "name": "BaseBdev3", 00:15:22.530 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:22.530 "is_configured": true, 00:15:22.530 "data_offset": 2048, 00:15:22.530 "data_size": 
63488 00:15:22.530 }, 00:15:22.530 { 00:15:22.530 "name": "BaseBdev4", 00:15:22.530 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:22.530 "is_configured": true, 00:15:22.530 "data_offset": 2048, 00:15:22.530 "data_size": 63488 00:15:22.530 } 00:15:22.530 ] 00:15:22.530 }' 00:15:22.530 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.791 [2024-09-30 23:33:02.449193] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.791 [2024-09-30 23:33:02.508071] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:22.791 [2024-09-30 23:33:02.508121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.791 [2024-09-30 23:33:02.508139] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.791 [2024-09-30 23:33:02.508146] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.791 "name": "raid_bdev1", 00:15:22.791 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:22.791 "strip_size_kb": 64, 00:15:22.791 "state": "online", 00:15:22.791 "raid_level": "raid5f", 00:15:22.791 "superblock": true, 00:15:22.791 "num_base_bdevs": 4, 00:15:22.791 "num_base_bdevs_discovered": 3, 00:15:22.791 "num_base_bdevs_operational": 3, 00:15:22.791 "base_bdevs_list": [ 00:15:22.791 
{ 00:15:22.791 "name": null, 00:15:22.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.791 "is_configured": false, 00:15:22.791 "data_offset": 0, 00:15:22.791 "data_size": 63488 00:15:22.791 }, 00:15:22.791 { 00:15:22.791 "name": "BaseBdev2", 00:15:22.791 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:22.791 "is_configured": true, 00:15:22.791 "data_offset": 2048, 00:15:22.791 "data_size": 63488 00:15:22.791 }, 00:15:22.791 { 00:15:22.791 "name": "BaseBdev3", 00:15:22.791 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:22.791 "is_configured": true, 00:15:22.791 "data_offset": 2048, 00:15:22.791 "data_size": 63488 00:15:22.791 }, 00:15:22.791 { 00:15:22.791 "name": "BaseBdev4", 00:15:22.791 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:22.791 "is_configured": true, 00:15:22.791 "data_offset": 2048, 00:15:22.791 "data_size": 63488 00:15:22.791 } 00:15:22.791 ] 00:15:22.791 }' 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.791 23:33:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.361 23:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:23.361 23:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.361 23:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.361 [2024-09-30 23:33:03.014977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:23.361 [2024-09-30 23:33:03.015028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.361 [2024-09-30 23:33:03.015056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:23.361 [2024-09-30 23:33:03.015066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.361 [2024-09-30 23:33:03.015550] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.361 [2024-09-30 23:33:03.015579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.361 [2024-09-30 23:33:03.015664] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:23.361 [2024-09-30 23:33:03.015681] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:23.361 [2024-09-30 23:33:03.015697] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:23.361 [2024-09-30 23:33:03.015729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.361 [2024-09-30 23:33:03.020126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:23.361 spare 00:15:23.361 23:33:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.361 23:33:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:23.361 [2024-09-30 23:33:03.022541] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.306 "name": "raid_bdev1", 00:15:24.306 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:24.306 "strip_size_kb": 64, 00:15:24.306 "state": "online", 00:15:24.306 "raid_level": "raid5f", 00:15:24.306 "superblock": true, 00:15:24.306 "num_base_bdevs": 4, 00:15:24.306 "num_base_bdevs_discovered": 4, 00:15:24.306 "num_base_bdevs_operational": 4, 00:15:24.306 "process": { 00:15:24.306 "type": "rebuild", 00:15:24.306 "target": "spare", 00:15:24.306 "progress": { 00:15:24.306 "blocks": 19200, 00:15:24.306 "percent": 10 00:15:24.306 } 00:15:24.306 }, 00:15:24.306 "base_bdevs_list": [ 00:15:24.306 { 00:15:24.306 "name": "spare", 00:15:24.306 "uuid": "c50969b1-f3a1-5c56-b65e-853dd5c24927", 00:15:24.306 "is_configured": true, 00:15:24.306 "data_offset": 2048, 00:15:24.306 "data_size": 63488 00:15:24.306 }, 00:15:24.306 { 00:15:24.306 "name": "BaseBdev2", 00:15:24.306 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:24.306 "is_configured": true, 00:15:24.306 "data_offset": 2048, 00:15:24.306 "data_size": 63488 00:15:24.306 }, 00:15:24.306 { 00:15:24.306 "name": "BaseBdev3", 00:15:24.306 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:24.306 "is_configured": true, 00:15:24.306 "data_offset": 2048, 00:15:24.306 "data_size": 63488 00:15:24.306 }, 00:15:24.306 { 00:15:24.306 "name": "BaseBdev4", 00:15:24.306 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:24.306 "is_configured": true, 00:15:24.306 "data_offset": 2048, 00:15:24.306 "data_size": 63488 00:15:24.306 } 
00:15:24.306 ] 00:15:24.306 }' 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.306 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.567 [2024-09-30 23:33:04.165867] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.567 [2024-09-30 23:33:04.228782] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:24.567 [2024-09-30 23:33:04.228893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.567 [2024-09-30 23:33:04.228932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.567 [2024-09-30 23:33:04.228955] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.567 "name": "raid_bdev1", 00:15:24.567 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:24.567 "strip_size_kb": 64, 00:15:24.567 "state": "online", 00:15:24.567 "raid_level": "raid5f", 00:15:24.567 "superblock": true, 00:15:24.567 "num_base_bdevs": 4, 00:15:24.567 "num_base_bdevs_discovered": 3, 00:15:24.567 "num_base_bdevs_operational": 3, 00:15:24.567 "base_bdevs_list": [ 00:15:24.567 { 00:15:24.567 "name": null, 00:15:24.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.567 "is_configured": false, 00:15:24.567 "data_offset": 0, 00:15:24.567 "data_size": 63488 00:15:24.567 }, 00:15:24.567 { 00:15:24.567 
"name": "BaseBdev2", 00:15:24.567 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:24.567 "is_configured": true, 00:15:24.567 "data_offset": 2048, 00:15:24.567 "data_size": 63488 00:15:24.567 }, 00:15:24.567 { 00:15:24.567 "name": "BaseBdev3", 00:15:24.567 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:24.567 "is_configured": true, 00:15:24.567 "data_offset": 2048, 00:15:24.567 "data_size": 63488 00:15:24.567 }, 00:15:24.567 { 00:15:24.567 "name": "BaseBdev4", 00:15:24.567 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:24.567 "is_configured": true, 00:15:24.567 "data_offset": 2048, 00:15:24.567 "data_size": 63488 00:15:24.567 } 00:15:24.567 ] 00:15:24.567 }' 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.567 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.827 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:25.087 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.087 "name": "raid_bdev1", 00:15:25.087 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:25.087 "strip_size_kb": 64, 00:15:25.087 "state": "online", 00:15:25.087 "raid_level": "raid5f", 00:15:25.087 "superblock": true, 00:15:25.087 "num_base_bdevs": 4, 00:15:25.087 "num_base_bdevs_discovered": 3, 00:15:25.087 "num_base_bdevs_operational": 3, 00:15:25.087 "base_bdevs_list": [ 00:15:25.087 { 00:15:25.087 "name": null, 00:15:25.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.087 "is_configured": false, 00:15:25.087 "data_offset": 0, 00:15:25.087 "data_size": 63488 00:15:25.087 }, 00:15:25.087 { 00:15:25.087 "name": "BaseBdev2", 00:15:25.087 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:25.087 "is_configured": true, 00:15:25.087 "data_offset": 2048, 00:15:25.087 "data_size": 63488 00:15:25.087 }, 00:15:25.087 { 00:15:25.087 "name": "BaseBdev3", 00:15:25.087 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:25.087 "is_configured": true, 00:15:25.087 "data_offset": 2048, 00:15:25.087 "data_size": 63488 00:15:25.087 }, 00:15:25.087 { 00:15:25.087 "name": "BaseBdev4", 00:15:25.087 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:25.087 "is_configured": true, 00:15:25.087 "data_offset": 2048, 00:15:25.087 "data_size": 63488 00:15:25.087 } 00:15:25.087 ] 00:15:25.087 }' 00:15:25.087 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.087 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:25.087 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.087 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.088 [2024-09-30 23:33:04.819730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:25.088 [2024-09-30 23:33:04.819823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.088 [2024-09-30 23:33:04.819866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:25.088 [2024-09-30 23:33:04.819898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.088 [2024-09-30 23:33:04.820388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.088 [2024-09-30 23:33:04.820458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.088 [2024-09-30 23:33:04.820560] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:25.088 [2024-09-30 23:33:04.820614] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:25.088 [2024-09-30 23:33:04.820652] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:25.088 [2024-09-30 23:33:04.820695] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:15:25.088 BaseBdev1 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.088 23:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.026 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.286 23:33:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.286 "name": "raid_bdev1", 00:15:26.286 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:26.286 "strip_size_kb": 64, 00:15:26.286 "state": "online", 00:15:26.286 "raid_level": "raid5f", 00:15:26.286 "superblock": true, 00:15:26.286 "num_base_bdevs": 4, 00:15:26.286 "num_base_bdevs_discovered": 3, 00:15:26.286 "num_base_bdevs_operational": 3, 00:15:26.286 "base_bdevs_list": [ 00:15:26.286 { 00:15:26.286 "name": null, 00:15:26.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.286 "is_configured": false, 00:15:26.286 "data_offset": 0, 00:15:26.286 "data_size": 63488 00:15:26.286 }, 00:15:26.286 { 00:15:26.286 "name": "BaseBdev2", 00:15:26.286 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:26.286 "is_configured": true, 00:15:26.286 "data_offset": 2048, 00:15:26.286 "data_size": 63488 00:15:26.286 }, 00:15:26.286 { 00:15:26.286 "name": "BaseBdev3", 00:15:26.286 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:26.286 "is_configured": true, 00:15:26.286 "data_offset": 2048, 00:15:26.286 "data_size": 63488 00:15:26.286 }, 00:15:26.286 { 00:15:26.286 "name": "BaseBdev4", 00:15:26.286 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:26.286 "is_configured": true, 00:15:26.286 "data_offset": 2048, 00:15:26.286 "data_size": 63488 00:15:26.286 } 00:15:26.286 ] 00:15:26.286 }' 00:15:26.286 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.286 23:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.546 23:33:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.546 "name": "raid_bdev1", 00:15:26.546 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:26.546 "strip_size_kb": 64, 00:15:26.546 "state": "online", 00:15:26.546 "raid_level": "raid5f", 00:15:26.546 "superblock": true, 00:15:26.546 "num_base_bdevs": 4, 00:15:26.546 "num_base_bdevs_discovered": 3, 00:15:26.546 "num_base_bdevs_operational": 3, 00:15:26.546 "base_bdevs_list": [ 00:15:26.546 { 00:15:26.546 "name": null, 00:15:26.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.546 "is_configured": false, 00:15:26.546 "data_offset": 0, 00:15:26.546 "data_size": 63488 00:15:26.546 }, 00:15:26.546 { 00:15:26.546 "name": "BaseBdev2", 00:15:26.546 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:26.546 "is_configured": true, 00:15:26.546 "data_offset": 2048, 00:15:26.546 "data_size": 63488 00:15:26.546 }, 00:15:26.546 { 00:15:26.546 "name": "BaseBdev3", 00:15:26.546 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:26.546 "is_configured": true, 00:15:26.546 "data_offset": 2048, 00:15:26.546 "data_size": 63488 00:15:26.546 }, 00:15:26.546 { 00:15:26.546 "name": "BaseBdev4", 00:15:26.546 "uuid": 
"c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:26.546 "is_configured": true, 00:15:26.546 "data_offset": 2048, 00:15:26.546 "data_size": 63488 00:15:26.546 } 00:15:26.546 ] 00:15:26.546 }' 00:15:26.546 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.806 [2024-09-30 23:33:06.469066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.806 
[2024-09-30 23:33:06.469225] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:26.806 [2024-09-30 23:33:06.469278] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:26.806 request: 00:15:26.806 { 00:15:26.806 "base_bdev": "BaseBdev1", 00:15:26.806 "raid_bdev": "raid_bdev1", 00:15:26.806 "method": "bdev_raid_add_base_bdev", 00:15:26.806 "req_id": 1 00:15:26.806 } 00:15:26.806 Got JSON-RPC error response 00:15:26.806 response: 00:15:26.806 { 00:15:26.806 "code": -22, 00:15:26.806 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:26.806 } 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:26.806 23:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.746 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.746 "name": "raid_bdev1", 00:15:27.746 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:27.746 "strip_size_kb": 64, 00:15:27.746 "state": "online", 00:15:27.746 "raid_level": "raid5f", 00:15:27.746 "superblock": true, 00:15:27.746 "num_base_bdevs": 4, 00:15:27.746 "num_base_bdevs_discovered": 3, 00:15:27.746 "num_base_bdevs_operational": 3, 00:15:27.746 "base_bdevs_list": [ 00:15:27.746 { 00:15:27.746 "name": null, 00:15:27.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.746 "is_configured": false, 00:15:27.746 "data_offset": 0, 00:15:27.746 "data_size": 63488 00:15:27.746 }, 00:15:27.746 { 00:15:27.746 "name": "BaseBdev2", 00:15:27.746 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:27.746 "is_configured": true, 00:15:27.746 "data_offset": 2048, 00:15:27.746 "data_size": 63488 00:15:27.746 }, 00:15:27.746 { 00:15:27.746 "name": 
"BaseBdev3", 00:15:27.746 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:27.746 "is_configured": true, 00:15:27.746 "data_offset": 2048, 00:15:27.747 "data_size": 63488 00:15:27.747 }, 00:15:27.747 { 00:15:27.747 "name": "BaseBdev4", 00:15:27.747 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:27.747 "is_configured": true, 00:15:27.747 "data_offset": 2048, 00:15:27.747 "data_size": 63488 00:15:27.747 } 00:15:27.747 ] 00:15:27.747 }' 00:15:27.747 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.747 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.316 "name": "raid_bdev1", 00:15:28.316 "uuid": "4739503c-d95f-4bf8-90d1-59aa58bb25dc", 00:15:28.316 
"strip_size_kb": 64, 00:15:28.316 "state": "online", 00:15:28.316 "raid_level": "raid5f", 00:15:28.316 "superblock": true, 00:15:28.316 "num_base_bdevs": 4, 00:15:28.316 "num_base_bdevs_discovered": 3, 00:15:28.316 "num_base_bdevs_operational": 3, 00:15:28.316 "base_bdevs_list": [ 00:15:28.316 { 00:15:28.316 "name": null, 00:15:28.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.316 "is_configured": false, 00:15:28.316 "data_offset": 0, 00:15:28.316 "data_size": 63488 00:15:28.316 }, 00:15:28.316 { 00:15:28.316 "name": "BaseBdev2", 00:15:28.316 "uuid": "193e9a7b-9182-50e7-805f-c38604cdbcdf", 00:15:28.316 "is_configured": true, 00:15:28.316 "data_offset": 2048, 00:15:28.316 "data_size": 63488 00:15:28.316 }, 00:15:28.316 { 00:15:28.316 "name": "BaseBdev3", 00:15:28.316 "uuid": "a0620dce-3958-58a0-b44c-6be6c48203fe", 00:15:28.316 "is_configured": true, 00:15:28.316 "data_offset": 2048, 00:15:28.316 "data_size": 63488 00:15:28.316 }, 00:15:28.316 { 00:15:28.316 "name": "BaseBdev4", 00:15:28.316 "uuid": "c6118539-9caf-50ac-84c3-dcc8bf29351c", 00:15:28.316 "is_configured": true, 00:15:28.316 "data_offset": 2048, 00:15:28.316 "data_size": 63488 00:15:28.316 } 00:15:28.316 ] 00:15:28.316 }' 00:15:28.316 23:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95564 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95564 ']' 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95564 00:15:28.316 
23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95564 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95564' 00:15:28.316 killing process with pid 95564 00:15:28.316 Received shutdown signal, test time was about 60.000000 seconds 00:15:28.316 00:15:28.316 Latency(us) 00:15:28.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.316 =================================================================================================================== 00:15:28.316 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:28.316 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95564 00:15:28.316 [2024-09-30 23:33:08.103157] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.316 [2024-09-30 23:33:08.103257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.317 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95564 00:15:28.317 [2024-09-30 23:33:08.103332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.317 [2024-09-30 23:33:08.103343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:28.575 [2024-09-30 23:33:08.195427] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.836 23:33:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:28.836 00:15:28.836 real 0m25.841s 00:15:28.836 user 0m32.569s 00:15:28.836 sys 0m3.518s 00:15:28.836 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.836 ************************************ 00:15:28.836 END TEST raid5f_rebuild_test_sb 00:15:28.836 ************************************ 00:15:28.836 23:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.836 23:33:08 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:28.836 23:33:08 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:28.836 23:33:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:28.836 23:33:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.836 23:33:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.836 ************************************ 00:15:28.836 START TEST raid_state_function_test_sb_4k 00:15:28.836 ************************************ 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.836 23:33:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96362 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96362' 00:15:28.836 Process raid pid: 96362 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96362 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96362 ']' 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.836 23:33:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.096 [2024-09-30 23:33:08.734075] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:15:29.096 [2024-09-30 23:33:08.734319] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.096 [2024-09-30 23:33:08.898834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.357 [2024-09-30 23:33:08.972391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.357 [2024-09-30 23:33:09.048313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.357 [2024-09-30 23:33:09.048424] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.925 [2024-09-30 23:33:09.555721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.925 [2024-09-30 23:33:09.555852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.925 [2024-09-30 23:33:09.555905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.925 [2024-09-30 23:33:09.555929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.925 "name": "Existed_Raid", 00:15:29.925 "uuid": 
"38b76a68-cf2f-4cb6-8479-d61d3aeacaf5", 00:15:29.925 "strip_size_kb": 0, 00:15:29.925 "state": "configuring", 00:15:29.925 "raid_level": "raid1", 00:15:29.925 "superblock": true, 00:15:29.925 "num_base_bdevs": 2, 00:15:29.925 "num_base_bdevs_discovered": 0, 00:15:29.925 "num_base_bdevs_operational": 2, 00:15:29.925 "base_bdevs_list": [ 00:15:29.925 { 00:15:29.925 "name": "BaseBdev1", 00:15:29.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.925 "is_configured": false, 00:15:29.925 "data_offset": 0, 00:15:29.925 "data_size": 0 00:15:29.925 }, 00:15:29.925 { 00:15:29.925 "name": "BaseBdev2", 00:15:29.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.925 "is_configured": false, 00:15:29.925 "data_offset": 0, 00:15:29.925 "data_size": 0 00:15:29.925 } 00:15:29.925 ] 00:15:29.925 }' 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.925 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.184 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.185 [2024-09-30 23:33:09.970992] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.185 [2024-09-30 23:33:09.971084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:30.185 23:33:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.185 [2024-09-30 23:33:09.983013] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.185 [2024-09-30 23:33:09.983091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.185 [2024-09-30 23:33:09.983115] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.185 [2024-09-30 23:33:09.983136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.185 23:33:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.185 [2024-09-30 23:33:10.009833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.185 BaseBdev1 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.185 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.185 [ 00:15:30.185 { 00:15:30.185 "name": "BaseBdev1", 00:15:30.185 "aliases": [ 00:15:30.185 "bc8de596-3a27-4502-83b9-976a3d4ba69a" 00:15:30.185 ], 00:15:30.185 "product_name": "Malloc disk", 00:15:30.185 "block_size": 4096, 00:15:30.185 "num_blocks": 8192, 00:15:30.185 "uuid": "bc8de596-3a27-4502-83b9-976a3d4ba69a", 00:15:30.185 "assigned_rate_limits": { 00:15:30.443 "rw_ios_per_sec": 0, 00:15:30.443 "rw_mbytes_per_sec": 0, 00:15:30.443 "r_mbytes_per_sec": 0, 00:15:30.443 "w_mbytes_per_sec": 0 00:15:30.443 }, 00:15:30.443 "claimed": true, 00:15:30.443 "claim_type": "exclusive_write", 00:15:30.443 "zoned": false, 00:15:30.443 "supported_io_types": { 00:15:30.443 "read": true, 00:15:30.443 "write": true, 00:15:30.443 "unmap": true, 00:15:30.443 "flush": true, 00:15:30.443 "reset": true, 00:15:30.443 "nvme_admin": false, 00:15:30.443 "nvme_io": false, 00:15:30.443 "nvme_io_md": false, 00:15:30.443 "write_zeroes": true, 00:15:30.443 "zcopy": true, 00:15:30.443 
"get_zone_info": false, 00:15:30.443 "zone_management": false, 00:15:30.443 "zone_append": false, 00:15:30.443 "compare": false, 00:15:30.443 "compare_and_write": false, 00:15:30.443 "abort": true, 00:15:30.443 "seek_hole": false, 00:15:30.443 "seek_data": false, 00:15:30.443 "copy": true, 00:15:30.443 "nvme_iov_md": false 00:15:30.443 }, 00:15:30.443 "memory_domains": [ 00:15:30.443 { 00:15:30.443 "dma_device_id": "system", 00:15:30.443 "dma_device_type": 1 00:15:30.443 }, 00:15:30.443 { 00:15:30.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.443 "dma_device_type": 2 00:15:30.443 } 00:15:30.443 ], 00:15:30.443 "driver_specific": {} 00:15:30.443 } 00:15:30.443 ] 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.443 "name": "Existed_Raid", 00:15:30.443 "uuid": "a8fe2066-3ef4-4726-821c-c943a605d1aa", 00:15:30.443 "strip_size_kb": 0, 00:15:30.443 "state": "configuring", 00:15:30.443 "raid_level": "raid1", 00:15:30.443 "superblock": true, 00:15:30.443 "num_base_bdevs": 2, 00:15:30.443 "num_base_bdevs_discovered": 1, 00:15:30.443 "num_base_bdevs_operational": 2, 00:15:30.443 "base_bdevs_list": [ 00:15:30.443 { 00:15:30.443 "name": "BaseBdev1", 00:15:30.443 "uuid": "bc8de596-3a27-4502-83b9-976a3d4ba69a", 00:15:30.443 "is_configured": true, 00:15:30.443 "data_offset": 256, 00:15:30.443 "data_size": 7936 00:15:30.443 }, 00:15:30.443 { 00:15:30.443 "name": "BaseBdev2", 00:15:30.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.443 "is_configured": false, 00:15:30.443 "data_offset": 0, 00:15:30.443 "data_size": 0 00:15:30.443 } 00:15:30.443 ] 00:15:30.443 }' 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.443 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.703 [2024-09-30 23:33:10.524951] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.703 [2024-09-30 23:33:10.525036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.703 [2024-09-30 23:33:10.536980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.703 [2024-09-30 23:33:10.538994] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.703 [2024-09-30 23:33:10.539072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:30.703 23:33:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.703 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:30.963 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.963 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.963 "name": "Existed_Raid", 00:15:30.963 "uuid": "a687c500-36e1-438f-91b8-ddbbc4dfc6b3", 00:15:30.963 "strip_size_kb": 0, 00:15:30.963 "state": "configuring", 00:15:30.963 "raid_level": "raid1", 00:15:30.963 "superblock": true, 
00:15:30.963 "num_base_bdevs": 2, 00:15:30.963 "num_base_bdevs_discovered": 1, 00:15:30.963 "num_base_bdevs_operational": 2, 00:15:30.963 "base_bdevs_list": [ 00:15:30.963 { 00:15:30.963 "name": "BaseBdev1", 00:15:30.963 "uuid": "bc8de596-3a27-4502-83b9-976a3d4ba69a", 00:15:30.963 "is_configured": true, 00:15:30.963 "data_offset": 256, 00:15:30.963 "data_size": 7936 00:15:30.963 }, 00:15:30.963 { 00:15:30.963 "name": "BaseBdev2", 00:15:30.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.963 "is_configured": false, 00:15:30.963 "data_offset": 0, 00:15:30.963 "data_size": 0 00:15:30.963 } 00:15:30.963 ] 00:15:30.963 }' 00:15:30.963 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.963 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.223 [2024-09-30 23:33:10.961245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.223 [2024-09-30 23:33:10.962027] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:31.223 [2024-09-30 23:33:10.962201] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:31.223 BaseBdev2 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.223 [2024-09-30 23:33:10.963309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:31.223 [2024-09-30 
23:33:10.963813] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:31.223 [2024-09-30 23:33:10.963942] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:31.223 [2024-09-30 23:33:10.964407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.223 23:33:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.223 [ 00:15:31.223 { 00:15:31.223 "name": "BaseBdev2", 00:15:31.223 "aliases": [ 00:15:31.223 "07ce0170-7f3e-4f9f-bf87-ea412250692e" 00:15:31.223 ], 00:15:31.223 "product_name": "Malloc 
disk", 00:15:31.223 "block_size": 4096, 00:15:31.223 "num_blocks": 8192, 00:15:31.223 "uuid": "07ce0170-7f3e-4f9f-bf87-ea412250692e", 00:15:31.223 "assigned_rate_limits": { 00:15:31.223 "rw_ios_per_sec": 0, 00:15:31.223 "rw_mbytes_per_sec": 0, 00:15:31.223 "r_mbytes_per_sec": 0, 00:15:31.223 "w_mbytes_per_sec": 0 00:15:31.223 }, 00:15:31.223 "claimed": true, 00:15:31.223 "claim_type": "exclusive_write", 00:15:31.223 "zoned": false, 00:15:31.223 "supported_io_types": { 00:15:31.223 "read": true, 00:15:31.223 "write": true, 00:15:31.223 "unmap": true, 00:15:31.223 "flush": true, 00:15:31.223 "reset": true, 00:15:31.223 "nvme_admin": false, 00:15:31.223 "nvme_io": false, 00:15:31.223 "nvme_io_md": false, 00:15:31.223 "write_zeroes": true, 00:15:31.223 "zcopy": true, 00:15:31.223 "get_zone_info": false, 00:15:31.223 "zone_management": false, 00:15:31.223 "zone_append": false, 00:15:31.223 "compare": false, 00:15:31.223 "compare_and_write": false, 00:15:31.223 "abort": true, 00:15:31.223 "seek_hole": false, 00:15:31.223 "seek_data": false, 00:15:31.223 "copy": true, 00:15:31.223 "nvme_iov_md": false 00:15:31.223 }, 00:15:31.223 "memory_domains": [ 00:15:31.223 { 00:15:31.223 "dma_device_id": "system", 00:15:31.223 "dma_device_type": 1 00:15:31.223 }, 00:15:31.223 { 00:15:31.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.223 "dma_device_type": 2 00:15:31.223 } 00:15:31.223 ], 00:15:31.223 "driver_specific": {} 00:15:31.223 } 00:15:31.223 ] 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.223 "name": "Existed_Raid", 00:15:31.223 "uuid": "a687c500-36e1-438f-91b8-ddbbc4dfc6b3", 00:15:31.223 "strip_size_kb": 0, 00:15:31.223 "state": "online", 
00:15:31.223 "raid_level": "raid1", 00:15:31.223 "superblock": true, 00:15:31.223 "num_base_bdevs": 2, 00:15:31.223 "num_base_bdevs_discovered": 2, 00:15:31.223 "num_base_bdevs_operational": 2, 00:15:31.223 "base_bdevs_list": [ 00:15:31.223 { 00:15:31.223 "name": "BaseBdev1", 00:15:31.223 "uuid": "bc8de596-3a27-4502-83b9-976a3d4ba69a", 00:15:31.223 "is_configured": true, 00:15:31.223 "data_offset": 256, 00:15:31.223 "data_size": 7936 00:15:31.223 }, 00:15:31.223 { 00:15:31.223 "name": "BaseBdev2", 00:15:31.223 "uuid": "07ce0170-7f3e-4f9f-bf87-ea412250692e", 00:15:31.223 "is_configured": true, 00:15:31.223 "data_offset": 256, 00:15:31.223 "data_size": 7936 00:15:31.223 } 00:15:31.223 ] 00:15:31.223 }' 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.223 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.794 [2024-09-30 23:33:11.452636] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.794 "name": "Existed_Raid", 00:15:31.794 "aliases": [ 00:15:31.794 "a687c500-36e1-438f-91b8-ddbbc4dfc6b3" 00:15:31.794 ], 00:15:31.794 "product_name": "Raid Volume", 00:15:31.794 "block_size": 4096, 00:15:31.794 "num_blocks": 7936, 00:15:31.794 "uuid": "a687c500-36e1-438f-91b8-ddbbc4dfc6b3", 00:15:31.794 "assigned_rate_limits": { 00:15:31.794 "rw_ios_per_sec": 0, 00:15:31.794 "rw_mbytes_per_sec": 0, 00:15:31.794 "r_mbytes_per_sec": 0, 00:15:31.794 "w_mbytes_per_sec": 0 00:15:31.794 }, 00:15:31.794 "claimed": false, 00:15:31.794 "zoned": false, 00:15:31.794 "supported_io_types": { 00:15:31.794 "read": true, 00:15:31.794 "write": true, 00:15:31.794 "unmap": false, 00:15:31.794 "flush": false, 00:15:31.794 "reset": true, 00:15:31.794 "nvme_admin": false, 00:15:31.794 "nvme_io": false, 00:15:31.794 "nvme_io_md": false, 00:15:31.794 "write_zeroes": true, 00:15:31.794 "zcopy": false, 00:15:31.794 "get_zone_info": false, 00:15:31.794 "zone_management": false, 00:15:31.794 "zone_append": false, 00:15:31.794 "compare": false, 00:15:31.794 "compare_and_write": false, 00:15:31.794 "abort": false, 00:15:31.794 "seek_hole": false, 00:15:31.794 "seek_data": false, 00:15:31.794 "copy": false, 00:15:31.794 "nvme_iov_md": false 00:15:31.794 }, 00:15:31.794 "memory_domains": [ 00:15:31.794 { 00:15:31.794 "dma_device_id": "system", 00:15:31.794 "dma_device_type": 1 00:15:31.794 }, 00:15:31.794 { 00:15:31.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.794 "dma_device_type": 2 00:15:31.794 }, 00:15:31.794 { 00:15:31.794 
"dma_device_id": "system", 00:15:31.794 "dma_device_type": 1 00:15:31.794 }, 00:15:31.794 { 00:15:31.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.794 "dma_device_type": 2 00:15:31.794 } 00:15:31.794 ], 00:15:31.794 "driver_specific": { 00:15:31.794 "raid": { 00:15:31.794 "uuid": "a687c500-36e1-438f-91b8-ddbbc4dfc6b3", 00:15:31.794 "strip_size_kb": 0, 00:15:31.794 "state": "online", 00:15:31.794 "raid_level": "raid1", 00:15:31.794 "superblock": true, 00:15:31.794 "num_base_bdevs": 2, 00:15:31.794 "num_base_bdevs_discovered": 2, 00:15:31.794 "num_base_bdevs_operational": 2, 00:15:31.794 "base_bdevs_list": [ 00:15:31.794 { 00:15:31.794 "name": "BaseBdev1", 00:15:31.794 "uuid": "bc8de596-3a27-4502-83b9-976a3d4ba69a", 00:15:31.794 "is_configured": true, 00:15:31.794 "data_offset": 256, 00:15:31.794 "data_size": 7936 00:15:31.794 }, 00:15:31.794 { 00:15:31.794 "name": "BaseBdev2", 00:15:31.794 "uuid": "07ce0170-7f3e-4f9f-bf87-ea412250692e", 00:15:31.794 "is_configured": true, 00:15:31.794 "data_offset": 256, 00:15:31.794 "data_size": 7936 00:15:31.794 } 00:15:31.794 ] 00:15:31.794 } 00:15:31.794 } 00:15:31.794 }' 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:31.794 BaseBdev2' 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.794 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:31.795 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.795 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:31.795 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.795 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.055 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.056 
23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.056 [2024-09-30 23:33:11.672041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.056 23:33:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.056 "name": "Existed_Raid", 00:15:32.056 "uuid": "a687c500-36e1-438f-91b8-ddbbc4dfc6b3", 00:15:32.056 "strip_size_kb": 0, 00:15:32.056 "state": "online", 00:15:32.056 "raid_level": "raid1", 00:15:32.056 "superblock": true, 00:15:32.056 "num_base_bdevs": 2, 00:15:32.056 "num_base_bdevs_discovered": 1, 00:15:32.056 "num_base_bdevs_operational": 1, 00:15:32.056 "base_bdevs_list": [ 00:15:32.056 { 00:15:32.056 "name": null, 00:15:32.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.056 "is_configured": false, 00:15:32.056 "data_offset": 0, 00:15:32.056 "data_size": 7936 00:15:32.056 }, 00:15:32.056 { 00:15:32.056 "name": "BaseBdev2", 00:15:32.056 "uuid": "07ce0170-7f3e-4f9f-bf87-ea412250692e", 00:15:32.056 "is_configured": true, 00:15:32.056 "data_offset": 256, 00:15:32.056 "data_size": 7936 00:15:32.056 } 00:15:32.056 ] 00:15:32.056 }' 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.056 23:33:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.316 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:32.316 23:33:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:32.316 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.316 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:32.316 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.316 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.316 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.576 [2024-09-30 23:33:12.191638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:32.576 [2024-09-30 23:33:12.191799] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.576 [2024-09-30 23:33:12.212631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.576 [2024-09-30 23:33:12.212749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.576 [2024-09-30 23:33:12.212790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:32.576 23:33:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96362 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96362 ']' 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96362 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96362 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.576 killing process with pid 96362 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96362' 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96362 00:15:32.576 [2024-09-30 23:33:12.335635] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.576 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96362 00:15:32.576 [2024-09-30 23:33:12.337182] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.147 ************************************ 00:15:33.147 END TEST raid_state_function_test_sb_4k 00:15:33.147 ************************************ 00:15:33.147 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:33.147 00:15:33.147 real 0m4.077s 00:15:33.147 user 0m6.186s 00:15:33.147 sys 0m0.913s 00:15:33.147 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:33.147 23:33:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.147 23:33:12 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:33.147 23:33:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:33.147 23:33:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:33.147 23:33:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.147 ************************************ 00:15:33.147 START TEST raid_superblock_test_4k 00:15:33.147 ************************************ 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96603 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 96603 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96603 ']' 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.147 23:33:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.147 [2024-09-30 23:33:12.883013] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:15:33.147 [2024-09-30 23:33:12.883180] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96603 ] 00:15:33.408 [2024-09-30 23:33:13.043561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.408 [2024-09-30 23:33:13.111853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.408 [2024-09-30 23:33:13.188505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.408 [2024-09-30 23:33:13.188621] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:33.978 23:33:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.978 malloc1 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.978 [2024-09-30 23:33:13.738507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:33.978 [2024-09-30 23:33:13.738646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.978 
[2024-09-30 23:33:13.738694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:33.978 [2024-09-30 23:33:13.738732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.978 [2024-09-30 23:33:13.741197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.978 [2024-09-30 23:33:13.741274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:33.978 pt1 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.978 malloc2 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.978 [2024-09-30 23:33:13.788883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:33.978 [2024-09-30 23:33:13.789086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.978 [2024-09-30 23:33:13.789170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.978 [2024-09-30 23:33:13.789253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.978 [2024-09-30 23:33:13.794391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.978 [2024-09-30 23:33:13.794499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:33.978 pt2 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.978 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:33.978 [2024-09-30 23:33:13.802804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:33.978 [2024-09-30 23:33:13.805648] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:33.978 [2024-09-30 23:33:13.805877] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:33.978 [2024-09-30 23:33:13.805939] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:33.978 [2024-09-30 23:33:13.806274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:33.978 [2024-09-30 23:33:13.806489] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:33.978 [2024-09-30 23:33:13.806543] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:33.979 [2024-09-30 23:33:13.806800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.979 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.239 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.239 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.239 "name": "raid_bdev1", 00:15:34.239 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:34.239 "strip_size_kb": 0, 00:15:34.239 "state": "online", 00:15:34.239 "raid_level": "raid1", 00:15:34.239 "superblock": true, 00:15:34.239 "num_base_bdevs": 2, 00:15:34.239 "num_base_bdevs_discovered": 2, 00:15:34.239 "num_base_bdevs_operational": 2, 00:15:34.239 "base_bdevs_list": [ 00:15:34.239 { 00:15:34.239 "name": "pt1", 00:15:34.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.239 "is_configured": true, 00:15:34.239 "data_offset": 256, 00:15:34.239 "data_size": 7936 00:15:34.239 }, 00:15:34.239 { 00:15:34.239 "name": "pt2", 00:15:34.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.239 "is_configured": true, 00:15:34.239 "data_offset": 256, 00:15:34.239 "data_size": 7936 00:15:34.239 } 00:15:34.239 ] 00:15:34.239 }' 00:15:34.239 23:33:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.239 23:33:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:34.499 23:33:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.499 [2024-09-30 23:33:14.278245] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.499 "name": "raid_bdev1", 00:15:34.499 "aliases": [ 00:15:34.499 "1e70d14b-ac26-403b-ba1a-732951b1eeda" 00:15:34.499 ], 00:15:34.499 "product_name": "Raid Volume", 00:15:34.499 "block_size": 4096, 00:15:34.499 "num_blocks": 7936, 00:15:34.499 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:34.499 "assigned_rate_limits": { 00:15:34.499 "rw_ios_per_sec": 0, 00:15:34.499 "rw_mbytes_per_sec": 0, 00:15:34.499 "r_mbytes_per_sec": 0, 00:15:34.499 "w_mbytes_per_sec": 0 00:15:34.499 }, 00:15:34.499 "claimed": false, 00:15:34.499 "zoned": false, 00:15:34.499 "supported_io_types": { 00:15:34.499 "read": true, 00:15:34.499 "write": true, 00:15:34.499 "unmap": false, 00:15:34.499 "flush": false, 
00:15:34.499 "reset": true, 00:15:34.499 "nvme_admin": false, 00:15:34.499 "nvme_io": false, 00:15:34.499 "nvme_io_md": false, 00:15:34.499 "write_zeroes": true, 00:15:34.499 "zcopy": false, 00:15:34.499 "get_zone_info": false, 00:15:34.499 "zone_management": false, 00:15:34.499 "zone_append": false, 00:15:34.499 "compare": false, 00:15:34.499 "compare_and_write": false, 00:15:34.499 "abort": false, 00:15:34.499 "seek_hole": false, 00:15:34.499 "seek_data": false, 00:15:34.499 "copy": false, 00:15:34.499 "nvme_iov_md": false 00:15:34.499 }, 00:15:34.499 "memory_domains": [ 00:15:34.499 { 00:15:34.499 "dma_device_id": "system", 00:15:34.499 "dma_device_type": 1 00:15:34.499 }, 00:15:34.499 { 00:15:34.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.499 "dma_device_type": 2 00:15:34.499 }, 00:15:34.499 { 00:15:34.499 "dma_device_id": "system", 00:15:34.499 "dma_device_type": 1 00:15:34.499 }, 00:15:34.499 { 00:15:34.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.499 "dma_device_type": 2 00:15:34.499 } 00:15:34.499 ], 00:15:34.499 "driver_specific": { 00:15:34.499 "raid": { 00:15:34.499 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:34.499 "strip_size_kb": 0, 00:15:34.499 "state": "online", 00:15:34.499 "raid_level": "raid1", 00:15:34.499 "superblock": true, 00:15:34.499 "num_base_bdevs": 2, 00:15:34.499 "num_base_bdevs_discovered": 2, 00:15:34.499 "num_base_bdevs_operational": 2, 00:15:34.499 "base_bdevs_list": [ 00:15:34.499 { 00:15:34.499 "name": "pt1", 00:15:34.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:34.499 "is_configured": true, 00:15:34.499 "data_offset": 256, 00:15:34.499 "data_size": 7936 00:15:34.499 }, 00:15:34.499 { 00:15:34.499 "name": "pt2", 00:15:34.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.499 "is_configured": true, 00:15:34.499 "data_offset": 256, 00:15:34.499 "data_size": 7936 00:15:34.499 } 00:15:34.499 ] 00:15:34.499 } 00:15:34.499 } 00:15:34.499 }' 00:15:34.499 23:33:14 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.499 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:34.499 pt2' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:34.760 [2024-09-30 23:33:14.493774] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1e70d14b-ac26-403b-ba1a-732951b1eeda 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 1e70d14b-ac26-403b-ba1a-732951b1eeda ']' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.760 [2024-09-30 23:33:14.533489] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.760 [2024-09-30 23:33:14.533558] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.760 [2024-09-30 23:33:14.533633] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.760 [2024-09-30 23:33:14.533716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.760 [2024-09-30 23:33:14.533755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:34.760 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.020 [2024-09-30 23:33:14.677257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:35.020 [2024-09-30 23:33:14.679312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:35.020 [2024-09-30 23:33:14.679417] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:35.020 [2024-09-30 23:33:14.679502] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:35.020 [2024-09-30 23:33:14.679540] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.020 [2024-09-30 23:33:14.679559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:35.020 request: 00:15:35.020 { 00:15:35.020 "name": "raid_bdev1", 00:15:35.020 "raid_level": "raid1", 00:15:35.020 "base_bdevs": [ 00:15:35.020 "malloc1", 00:15:35.020 "malloc2" 00:15:35.020 ], 00:15:35.020 "superblock": false, 00:15:35.020 "method": "bdev_raid_create", 00:15:35.020 "req_id": 1 00:15:35.020 } 00:15:35.020 Got JSON-RPC error response 00:15:35.020 response: 00:15:35.020 { 00:15:35.020 "code": -17, 00:15:35.020 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:35.020 } 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.020 [2024-09-30 23:33:14.733145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.020 [2024-09-30 23:33:14.733225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.020 [2024-09-30 23:33:14.733256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:35.020 [2024-09-30 23:33:14.733281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.020 [2024-09-30 23:33:14.735533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.020 [2024-09-30 23:33:14.735597] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.020 [2024-09-30 23:33:14.735674] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:35.020 [2024-09-30 23:33:14.735725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:35.020 pt1 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.020 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.021 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.021 "name": "raid_bdev1", 00:15:35.021 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:35.021 "strip_size_kb": 0, 00:15:35.021 "state": "configuring", 00:15:35.021 "raid_level": "raid1", 00:15:35.021 "superblock": true, 00:15:35.021 "num_base_bdevs": 2, 00:15:35.021 "num_base_bdevs_discovered": 1, 00:15:35.021 "num_base_bdevs_operational": 2, 00:15:35.021 "base_bdevs_list": [ 00:15:35.021 { 00:15:35.021 "name": "pt1", 00:15:35.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.021 "is_configured": true, 00:15:35.021 "data_offset": 256, 00:15:35.021 "data_size": 7936 00:15:35.021 }, 00:15:35.021 { 00:15:35.021 "name": null, 00:15:35.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.021 "is_configured": false, 00:15:35.021 "data_offset": 256, 00:15:35.021 "data_size": 7936 00:15:35.021 } 00:15:35.021 ] 00:15:35.021 }' 00:15:35.021 23:33:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.021 23:33:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.590 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:35.590 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:35.590 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:35.590 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:35.590 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.590 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:15:35.590 [2024-09-30 23:33:15.156417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:35.590 [2024-09-30 23:33:15.156504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.590 [2024-09-30 23:33:15.156538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:35.590 [2024-09-30 23:33:15.156566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.590 [2024-09-30 23:33:15.156929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.590 [2024-09-30 23:33:15.156979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:35.590 [2024-09-30 23:33:15.157056] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:35.591 [2024-09-30 23:33:15.157099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:35.591 [2024-09-30 23:33:15.157194] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:35.591 [2024-09-30 23:33:15.157226] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:35.591 [2024-09-30 23:33:15.157465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:35.591 [2024-09-30 23:33:15.157607] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:35.591 [2024-09-30 23:33:15.157651] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:35.591 [2024-09-30 23:33:15.157769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.591 pt2 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:35.591 23:33:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.591 "name": "raid_bdev1", 00:15:35.591 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:35.591 
"strip_size_kb": 0, 00:15:35.591 "state": "online", 00:15:35.591 "raid_level": "raid1", 00:15:35.591 "superblock": true, 00:15:35.591 "num_base_bdevs": 2, 00:15:35.591 "num_base_bdevs_discovered": 2, 00:15:35.591 "num_base_bdevs_operational": 2, 00:15:35.591 "base_bdevs_list": [ 00:15:35.591 { 00:15:35.591 "name": "pt1", 00:15:35.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.591 "is_configured": true, 00:15:35.591 "data_offset": 256, 00:15:35.591 "data_size": 7936 00:15:35.591 }, 00:15:35.591 { 00:15:35.591 "name": "pt2", 00:15:35.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.591 "is_configured": true, 00:15:35.591 "data_offset": 256, 00:15:35.591 "data_size": 7936 00:15:35.591 } 00:15:35.591 ] 00:15:35.591 }' 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.591 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.851 23:33:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:35.851 [2024-09-30 23:33:15.627878] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:35.851 "name": "raid_bdev1", 00:15:35.851 "aliases": [ 00:15:35.851 "1e70d14b-ac26-403b-ba1a-732951b1eeda" 00:15:35.851 ], 00:15:35.851 "product_name": "Raid Volume", 00:15:35.851 "block_size": 4096, 00:15:35.851 "num_blocks": 7936, 00:15:35.851 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:35.851 "assigned_rate_limits": { 00:15:35.851 "rw_ios_per_sec": 0, 00:15:35.851 "rw_mbytes_per_sec": 0, 00:15:35.851 "r_mbytes_per_sec": 0, 00:15:35.851 "w_mbytes_per_sec": 0 00:15:35.851 }, 00:15:35.851 "claimed": false, 00:15:35.851 "zoned": false, 00:15:35.851 "supported_io_types": { 00:15:35.851 "read": true, 00:15:35.851 "write": true, 00:15:35.851 "unmap": false, 00:15:35.851 "flush": false, 00:15:35.851 "reset": true, 00:15:35.851 "nvme_admin": false, 00:15:35.851 "nvme_io": false, 00:15:35.851 "nvme_io_md": false, 00:15:35.851 "write_zeroes": true, 00:15:35.851 "zcopy": false, 00:15:35.851 "get_zone_info": false, 00:15:35.851 "zone_management": false, 00:15:35.851 "zone_append": false, 00:15:35.851 "compare": false, 00:15:35.851 "compare_and_write": false, 00:15:35.851 "abort": false, 00:15:35.851 "seek_hole": false, 00:15:35.851 "seek_data": false, 00:15:35.851 "copy": false, 00:15:35.851 "nvme_iov_md": false 00:15:35.851 }, 00:15:35.851 "memory_domains": [ 00:15:35.851 { 00:15:35.851 "dma_device_id": "system", 00:15:35.851 "dma_device_type": 1 00:15:35.851 }, 00:15:35.851 { 00:15:35.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.851 "dma_device_type": 2 00:15:35.851 }, 00:15:35.851 { 00:15:35.851 "dma_device_id": "system", 00:15:35.851 
"dma_device_type": 1 00:15:35.851 }, 00:15:35.851 { 00:15:35.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.851 "dma_device_type": 2 00:15:35.851 } 00:15:35.851 ], 00:15:35.851 "driver_specific": { 00:15:35.851 "raid": { 00:15:35.851 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:35.851 "strip_size_kb": 0, 00:15:35.851 "state": "online", 00:15:35.851 "raid_level": "raid1", 00:15:35.851 "superblock": true, 00:15:35.851 "num_base_bdevs": 2, 00:15:35.851 "num_base_bdevs_discovered": 2, 00:15:35.851 "num_base_bdevs_operational": 2, 00:15:35.851 "base_bdevs_list": [ 00:15:35.851 { 00:15:35.851 "name": "pt1", 00:15:35.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:35.851 "is_configured": true, 00:15:35.851 "data_offset": 256, 00:15:35.851 "data_size": 7936 00:15:35.851 }, 00:15:35.851 { 00:15:35.851 "name": "pt2", 00:15:35.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:35.851 "is_configured": true, 00:15:35.851 "data_offset": 256, 00:15:35.851 "data_size": 7936 00:15:35.851 } 00:15:35.851 ] 00:15:35.851 } 00:15:35.851 } 00:15:35.851 }' 00:15:35.851 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:36.112 pt2' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:36.112 [2024-09-30 
23:33:15.875602] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 1e70d14b-ac26-403b-ba1a-732951b1eeda '!=' 1e70d14b-ac26-403b-ba1a-732951b1eeda ']' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.112 [2024-09-30 23:33:15.923291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.112 23:33:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.372 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.372 "name": "raid_bdev1", 00:15:36.372 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:36.372 "strip_size_kb": 0, 00:15:36.372 "state": "online", 00:15:36.372 "raid_level": "raid1", 00:15:36.372 "superblock": true, 00:15:36.372 "num_base_bdevs": 2, 00:15:36.372 "num_base_bdevs_discovered": 1, 00:15:36.372 "num_base_bdevs_operational": 1, 00:15:36.372 "base_bdevs_list": [ 00:15:36.372 { 00:15:36.372 "name": null, 00:15:36.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.372 "is_configured": false, 00:15:36.372 "data_offset": 0, 00:15:36.372 "data_size": 7936 00:15:36.372 }, 00:15:36.372 { 00:15:36.372 "name": "pt2", 00:15:36.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.372 "is_configured": true, 00:15:36.372 "data_offset": 256, 00:15:36.372 "data_size": 7936 00:15:36.372 } 00:15:36.372 ] 00:15:36.372 }' 00:15:36.372 23:33:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.372 23:33:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.632 [2024-09-30 23:33:16.350540] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:36.632 [2024-09-30 23:33:16.350607] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.632 [2024-09-30 23:33:16.350683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.632 [2024-09-30 23:33:16.350734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.632 [2024-09-30 23:33:16.350763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.632 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.632 [2024-09-30 23:33:16.426421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.632 [2024-09-30 23:33:16.426501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.632 [2024-09-30 23:33:16.426532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:36.632 [2024-09-30 23:33:16.426554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.632 [2024-09-30 23:33:16.428851] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.632 pt2 00:15:36.632 [2024-09-30 23:33:16.428930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.632 [2024-09-30 23:33:16.429001] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:36.632 [2024-09-30 23:33:16.429029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.632 [2024-09-30 23:33:16.429094] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:36.632 [2024-09-30 23:33:16.429102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:36.632 [2024-09-30 23:33:16.429303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:36.633 [2024-09-30 23:33:16.429412] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:36.633 [2024-09-30 23:33:16.429424] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:36.633 [2024-09-30 23:33:16.429511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.633 "name": "raid_bdev1", 00:15:36.633 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:36.633 "strip_size_kb": 0, 00:15:36.633 "state": "online", 00:15:36.633 "raid_level": "raid1", 00:15:36.633 "superblock": true, 00:15:36.633 "num_base_bdevs": 2, 00:15:36.633 "num_base_bdevs_discovered": 1, 00:15:36.633 "num_base_bdevs_operational": 1, 00:15:36.633 "base_bdevs_list": [ 00:15:36.633 { 00:15:36.633 "name": null, 00:15:36.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.633 "is_configured": false, 00:15:36.633 "data_offset": 256, 00:15:36.633 "data_size": 7936 00:15:36.633 }, 00:15:36.633 { 00:15:36.633 "name": "pt2", 00:15:36.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.633 "is_configured": true, 00:15:36.633 "data_offset": 256, 00:15:36.633 "data_size": 7936 00:15:36.633 } 00:15:36.633 ] 00:15:36.633 }' 
00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.633 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.203 [2024-09-30 23:33:16.889634] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.203 [2024-09-30 23:33:16.889697] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.203 [2024-09-30 23:33:16.889762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.203 [2024-09-30 23:33:16.889797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.203 [2024-09-30 23:33:16.889808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.203 [2024-09-30 23:33:16.953493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.203 [2024-09-30 23:33:16.953575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.203 [2024-09-30 23:33:16.953598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:15:37.203 [2024-09-30 23:33:16.953613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.203 [2024-09-30 23:33:16.955776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.203 [2024-09-30 23:33:16.955814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.203 [2024-09-30 23:33:16.955884] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:37.203 [2024-09-30 23:33:16.955920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:37.203 [2024-09-30 23:33:16.956005] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:37.203 [2024-09-30 23:33:16.956017] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.203 [2024-09-30 23:33:16.956038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:37.203 [2024-09-30 23:33:16.956075] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.203 [2024-09-30 23:33:16.956129] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:37.203 [2024-09-30 23:33:16.956140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:37.203 [2024-09-30 23:33:16.956340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:37.203 [2024-09-30 23:33:16.956445] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:37.203 [2024-09-30 23:33:16.956454] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:37.203 [2024-09-30 23:33:16.956551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.203 pt1 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.203 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.203 "name": "raid_bdev1", 00:15:37.203 "uuid": "1e70d14b-ac26-403b-ba1a-732951b1eeda", 00:15:37.203 "strip_size_kb": 0, 00:15:37.203 "state": "online", 00:15:37.203 "raid_level": "raid1", 00:15:37.203 "superblock": true, 00:15:37.203 "num_base_bdevs": 2, 00:15:37.204 "num_base_bdevs_discovered": 1, 00:15:37.204 "num_base_bdevs_operational": 1, 00:15:37.204 "base_bdevs_list": [ 00:15:37.204 { 00:15:37.204 "name": null, 00:15:37.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.204 "is_configured": false, 00:15:37.204 "data_offset": 256, 00:15:37.204 "data_size": 7936 00:15:37.204 }, 00:15:37.204 { 00:15:37.204 "name": "pt2", 00:15:37.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.204 "is_configured": true, 00:15:37.204 "data_offset": 256, 00:15:37.204 "data_size": 7936 00:15:37.204 } 00:15:37.204 ] 00:15:37.204 }' 00:15:37.204 23:33:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.204 23:33:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:37.773 [2024-09-30 23:33:17.432897] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 1e70d14b-ac26-403b-ba1a-732951b1eeda '!=' 1e70d14b-ac26-403b-ba1a-732951b1eeda ']' 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96603 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96603 ']' 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96603 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96603 00:15:37.773 killing process with pid 96603 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96603' 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96603 00:15:37.773 [2024-09-30 23:33:17.517764] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.773 [2024-09-30 23:33:17.517821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.773 [2024-09-30 23:33:17.517870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.773 [2024-09-30 23:33:17.517879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:37.773 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96603 00:15:37.773 [2024-09-30 23:33:17.559906] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.344 ************************************ 00:15:38.344 END TEST raid_superblock_test_4k 00:15:38.344 ************************************ 00:15:38.344 23:33:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:38.344 00:15:38.344 real 0m5.144s 00:15:38.344 user 0m8.204s 00:15:38.344 sys 0m1.140s 00:15:38.344 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.344 23:33:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.344 23:33:17 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:15:38.344 23:33:18 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:38.344 23:33:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:38.344 23:33:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.344 23:33:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.344 ************************************ 00:15:38.344 START TEST raid_rebuild_test_sb_4k 00:15:38.344 ************************************ 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96920 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96920 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96920 ']' 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.344 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:38.344 [2024-09-30 23:33:18.116552] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:15:38.344 [2024-09-30 23:33:18.116788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:38.344 Zero copy mechanism will not be used. 00:15:38.344 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96920 ] 00:15:38.604 [2024-09-30 23:33:18.276888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.604 [2024-09-30 23:33:18.347528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.604 [2024-09-30 23:33:18.424602] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.604 [2024-09-30 23:33:18.424727] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:39.174 
23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.174 BaseBdev1_malloc 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.174 [2024-09-30 23:33:18.971736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:39.174 [2024-09-30 23:33:18.971878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.174 [2024-09-30 23:33:18.971924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:39.174 [2024-09-30 23:33:18.971962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.174 [2024-09-30 23:33:18.974311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.174 [2024-09-30 23:33:18.974376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.174 BaseBdev1 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.174 23:33:18 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.174 BaseBdev2_malloc 00:15:39.174 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.174 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:39.174 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.174 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.434 [2024-09-30 23:33:19.028050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:39.434 [2024-09-30 23:33:19.028294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.434 [2024-09-30 23:33:19.028399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:39.434 [2024-09-30 23:33:19.028499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.434 [2024-09-30 23:33:19.033034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.434 [2024-09-30 23:33:19.033168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:39.434 BaseBdev2 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.434 spare_malloc 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.434 spare_delay 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.434 [2024-09-30 23:33:19.076966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.434 [2024-09-30 23:33:19.077062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.434 [2024-09-30 23:33:19.077100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:39.434 [2024-09-30 23:33:19.077128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.434 [2024-09-30 23:33:19.079396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.434 [2024-09-30 23:33:19.079462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.434 spare 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.434 
[2024-09-30 23:33:19.089007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.434 [2024-09-30 23:33:19.091029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.434 [2024-09-30 23:33:19.091217] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:39.434 [2024-09-30 23:33:19.091250] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:39.434 [2024-09-30 23:33:19.091548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:15:39.434 [2024-09-30 23:33:19.091730] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:39.434 [2024-09-30 23:33:19.091779] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:39.434 [2024-09-30 23:33:19.091959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.434 "name": "raid_bdev1", 00:15:39.434 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:39.434 "strip_size_kb": 0, 00:15:39.434 "state": "online", 00:15:39.434 "raid_level": "raid1", 00:15:39.434 "superblock": true, 00:15:39.434 "num_base_bdevs": 2, 00:15:39.434 "num_base_bdevs_discovered": 2, 00:15:39.434 "num_base_bdevs_operational": 2, 00:15:39.434 "base_bdevs_list": [ 00:15:39.434 { 00:15:39.434 "name": "BaseBdev1", 00:15:39.434 "uuid": "e4d65c8a-2a29-5b37-9b47-457faf7a9d07", 00:15:39.434 "is_configured": true, 00:15:39.434 "data_offset": 256, 00:15:39.434 "data_size": 7936 00:15:39.434 }, 00:15:39.434 { 00:15:39.434 "name": "BaseBdev2", 00:15:39.434 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:39.434 "is_configured": true, 00:15:39.434 "data_offset": 256, 00:15:39.434 "data_size": 7936 00:15:39.434 } 00:15:39.434 ] 00:15:39.434 }' 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.434 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:15:39.693 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.693 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:39.693 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.693 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.693 [2024-09-30 23:33:19.544550] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:39.953 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:40.213 [2024-09-30 23:33:19.819904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:40.213 /dev/nbd0 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.213 1+0 records in 00:15:40.213 1+0 records out 00:15:40.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369491 s, 11.1 MB/s 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:40.213 23:33:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:40.782 7936+0 records in 00:15:40.782 7936+0 records out 00:15:40.782 32505856 bytes (33 MB, 31 MiB) copied, 0.594832 s, 54.6 MB/s 00:15:40.782 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:40.782 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.782 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.782 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.782 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:40.782 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.782 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.042 [2024-09-30 23:33:20.704314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.042 [2024-09-30 23:33:20.717448] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.042 "name": 
"raid_bdev1", 00:15:41.042 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:41.042 "strip_size_kb": 0, 00:15:41.042 "state": "online", 00:15:41.042 "raid_level": "raid1", 00:15:41.042 "superblock": true, 00:15:41.042 "num_base_bdevs": 2, 00:15:41.042 "num_base_bdevs_discovered": 1, 00:15:41.042 "num_base_bdevs_operational": 1, 00:15:41.042 "base_bdevs_list": [ 00:15:41.042 { 00:15:41.042 "name": null, 00:15:41.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.042 "is_configured": false, 00:15:41.042 "data_offset": 0, 00:15:41.042 "data_size": 7936 00:15:41.042 }, 00:15:41.042 { 00:15:41.042 "name": "BaseBdev2", 00:15:41.042 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:41.042 "is_configured": true, 00:15:41.042 "data_offset": 256, 00:15:41.042 "data_size": 7936 00:15:41.042 } 00:15:41.042 ] 00:15:41.042 }' 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.042 23:33:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.612 23:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.612 23:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.612 23:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:41.612 [2024-09-30 23:33:21.164680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.612 [2024-09-30 23:33:21.172000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:15:41.612 23:33:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.612 23:33:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:41.612 [2024-09-30 23:33:21.174254] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.551 23:33:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.551 "name": "raid_bdev1", 00:15:42.551 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:42.551 "strip_size_kb": 0, 00:15:42.551 "state": "online", 00:15:42.551 "raid_level": "raid1", 00:15:42.551 "superblock": true, 00:15:42.551 "num_base_bdevs": 2, 00:15:42.551 "num_base_bdevs_discovered": 2, 00:15:42.551 "num_base_bdevs_operational": 2, 00:15:42.551 "process": { 00:15:42.551 "type": "rebuild", 00:15:42.551 "target": "spare", 00:15:42.551 "progress": { 00:15:42.551 "blocks": 2560, 00:15:42.551 "percent": 32 00:15:42.551 } 00:15:42.551 }, 00:15:42.551 "base_bdevs_list": [ 00:15:42.551 { 00:15:42.551 "name": "spare", 00:15:42.551 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:42.551 "is_configured": true, 00:15:42.551 "data_offset": 256, 
00:15:42.551 "data_size": 7936 00:15:42.551 }, 00:15:42.551 { 00:15:42.551 "name": "BaseBdev2", 00:15:42.551 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:42.551 "is_configured": true, 00:15:42.551 "data_offset": 256, 00:15:42.551 "data_size": 7936 00:15:42.551 } 00:15:42.551 ] 00:15:42.551 }' 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.551 [2024-09-30 23:33:22.337739] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.551 [2024-09-30 23:33:22.382478] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.551 [2024-09-30 23:33:22.382571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.551 [2024-09-30 23:33:22.382594] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.551 [2024-09-30 23:33:22.382602] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:42.551 
23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.551 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.552 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.552 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.552 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.552 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.552 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.812 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:42.812 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.812 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.812 "name": "raid_bdev1", 00:15:42.812 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:42.812 "strip_size_kb": 0, 00:15:42.812 "state": "online", 00:15:42.812 "raid_level": "raid1", 00:15:42.812 "superblock": true, 00:15:42.812 "num_base_bdevs": 2, 00:15:42.812 "num_base_bdevs_discovered": 1, 00:15:42.812 
"num_base_bdevs_operational": 1, 00:15:42.812 "base_bdevs_list": [ 00:15:42.812 { 00:15:42.812 "name": null, 00:15:42.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.812 "is_configured": false, 00:15:42.812 "data_offset": 0, 00:15:42.812 "data_size": 7936 00:15:42.812 }, 00:15:42.812 { 00:15:42.812 "name": "BaseBdev2", 00:15:42.812 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:42.812 "is_configured": true, 00:15:42.812 "data_offset": 256, 00:15:42.812 "data_size": 7936 00:15:42.812 } 00:15:42.812 ] 00:15:42.812 }' 00:15:42.812 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.812 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.071 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.071 
"name": "raid_bdev1", 00:15:43.071 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:43.071 "strip_size_kb": 0, 00:15:43.071 "state": "online", 00:15:43.071 "raid_level": "raid1", 00:15:43.071 "superblock": true, 00:15:43.071 "num_base_bdevs": 2, 00:15:43.071 "num_base_bdevs_discovered": 1, 00:15:43.071 "num_base_bdevs_operational": 1, 00:15:43.071 "base_bdevs_list": [ 00:15:43.071 { 00:15:43.071 "name": null, 00:15:43.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.071 "is_configured": false, 00:15:43.071 "data_offset": 0, 00:15:43.071 "data_size": 7936 00:15:43.071 }, 00:15:43.072 { 00:15:43.072 "name": "BaseBdev2", 00:15:43.072 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:43.072 "is_configured": true, 00:15:43.072 "data_offset": 256, 00:15:43.072 "data_size": 7936 00:15:43.072 } 00:15:43.072 ] 00:15:43.072 }' 00:15:43.072 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.330 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.330 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.330 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.330 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.330 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.330 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:43.330 [2024-09-30 23:33:22.980639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.330 [2024-09-30 23:33:22.986205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:15:43.330 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:43.330 23:33:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:43.331 [2024-09-30 23:33:22.988375] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.269 23:33:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.269 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.269 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.269 "name": "raid_bdev1", 00:15:44.269 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:44.269 "strip_size_kb": 0, 00:15:44.269 "state": "online", 00:15:44.269 "raid_level": "raid1", 00:15:44.269 "superblock": true, 00:15:44.269 "num_base_bdevs": 2, 00:15:44.269 "num_base_bdevs_discovered": 2, 00:15:44.269 "num_base_bdevs_operational": 2, 00:15:44.269 "process": { 00:15:44.269 "type": "rebuild", 00:15:44.269 "target": "spare", 00:15:44.269 "progress": { 00:15:44.269 "blocks": 2560, 00:15:44.269 
"percent": 32 00:15:44.269 } 00:15:44.269 }, 00:15:44.269 "base_bdevs_list": [ 00:15:44.269 { 00:15:44.269 "name": "spare", 00:15:44.269 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:44.269 "is_configured": true, 00:15:44.269 "data_offset": 256, 00:15:44.269 "data_size": 7936 00:15:44.269 }, 00:15:44.269 { 00:15:44.269 "name": "BaseBdev2", 00:15:44.269 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:44.269 "is_configured": true, 00:15:44.269 "data_offset": 256, 00:15:44.269 "data_size": 7936 00:15:44.269 } 00:15:44.269 ] 00:15:44.269 }' 00:15:44.269 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.269 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.269 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.269 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.269 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:44.269 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:44.528 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=565 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.528 "name": "raid_bdev1", 00:15:44.528 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:44.528 "strip_size_kb": 0, 00:15:44.528 "state": "online", 00:15:44.528 "raid_level": "raid1", 00:15:44.528 "superblock": true, 00:15:44.528 "num_base_bdevs": 2, 00:15:44.528 "num_base_bdevs_discovered": 2, 00:15:44.528 "num_base_bdevs_operational": 2, 00:15:44.528 "process": { 00:15:44.528 "type": "rebuild", 00:15:44.528 "target": "spare", 00:15:44.528 "progress": { 00:15:44.528 "blocks": 2816, 00:15:44.528 "percent": 35 00:15:44.528 } 00:15:44.528 }, 00:15:44.528 "base_bdevs_list": [ 00:15:44.528 { 00:15:44.528 "name": "spare", 00:15:44.528 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:44.528 "is_configured": true, 00:15:44.528 "data_offset": 256, 00:15:44.528 "data_size": 7936 00:15:44.528 }, 00:15:44.528 { 00:15:44.528 "name": "BaseBdev2", 
00:15:44.528 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:44.528 "is_configured": true, 00:15:44.528 "data_offset": 256, 00:15:44.528 "data_size": 7936 00:15:44.528 } 00:15:44.528 ] 00:15:44.528 }' 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.528 23:33:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.466 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.725 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.725 "name": "raid_bdev1", 00:15:45.725 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:45.725 "strip_size_kb": 0, 00:15:45.725 "state": "online", 00:15:45.725 "raid_level": "raid1", 00:15:45.725 "superblock": true, 00:15:45.725 "num_base_bdevs": 2, 00:15:45.725 "num_base_bdevs_discovered": 2, 00:15:45.725 "num_base_bdevs_operational": 2, 00:15:45.725 "process": { 00:15:45.725 "type": "rebuild", 00:15:45.725 "target": "spare", 00:15:45.725 "progress": { 00:15:45.725 "blocks": 5632, 00:15:45.725 "percent": 70 00:15:45.725 } 00:15:45.725 }, 00:15:45.725 "base_bdevs_list": [ 00:15:45.725 { 00:15:45.725 "name": "spare", 00:15:45.725 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:45.725 "is_configured": true, 00:15:45.725 "data_offset": 256, 00:15:45.725 "data_size": 7936 00:15:45.725 }, 00:15:45.725 { 00:15:45.725 "name": "BaseBdev2", 00:15:45.725 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:45.725 "is_configured": true, 00:15:45.725 "data_offset": 256, 00:15:45.725 "data_size": 7936 00:15:45.725 } 00:15:45.725 ] 00:15:45.725 }' 00:15:45.725 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.725 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.725 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.725 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.725 23:33:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.292 [2024-09-30 23:33:26.107488] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:46.293 [2024-09-30 23:33:26.107641] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:46.293 [2024-09-30 23:33:26.107759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.861 "name": "raid_bdev1", 00:15:46.861 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:46.861 "strip_size_kb": 0, 00:15:46.861 "state": "online", 00:15:46.861 "raid_level": "raid1", 00:15:46.861 "superblock": true, 00:15:46.861 "num_base_bdevs": 2, 00:15:46.861 "num_base_bdevs_discovered": 2, 00:15:46.861 "num_base_bdevs_operational": 2, 00:15:46.861 "base_bdevs_list": [ 00:15:46.861 { 00:15:46.861 "name": 
"spare", 00:15:46.861 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:46.861 "is_configured": true, 00:15:46.861 "data_offset": 256, 00:15:46.861 "data_size": 7936 00:15:46.861 }, 00:15:46.861 { 00:15:46.861 "name": "BaseBdev2", 00:15:46.861 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:46.861 "is_configured": true, 00:15:46.861 "data_offset": 256, 00:15:46.861 "data_size": 7936 00:15:46.861 } 00:15:46.861 ] 00:15:46.861 }' 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.861 "name": "raid_bdev1", 00:15:46.861 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:46.861 "strip_size_kb": 0, 00:15:46.861 "state": "online", 00:15:46.861 "raid_level": "raid1", 00:15:46.861 "superblock": true, 00:15:46.861 "num_base_bdevs": 2, 00:15:46.861 "num_base_bdevs_discovered": 2, 00:15:46.861 "num_base_bdevs_operational": 2, 00:15:46.861 "base_bdevs_list": [ 00:15:46.861 { 00:15:46.861 "name": "spare", 00:15:46.861 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:46.861 "is_configured": true, 00:15:46.861 "data_offset": 256, 00:15:46.861 "data_size": 7936 00:15:46.861 }, 00:15:46.861 { 00:15:46.861 "name": "BaseBdev2", 00:15:46.861 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:46.861 "is_configured": true, 00:15:46.861 "data_offset": 256, 00:15:46.861 "data_size": 7936 00:15:46.861 } 00:15:46.861 ] 00:15:46.861 }' 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.861 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.121 "name": "raid_bdev1", 00:15:47.121 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:47.121 "strip_size_kb": 0, 00:15:47.121 "state": "online", 00:15:47.121 "raid_level": "raid1", 00:15:47.121 "superblock": true, 00:15:47.121 "num_base_bdevs": 2, 00:15:47.121 "num_base_bdevs_discovered": 2, 00:15:47.121 "num_base_bdevs_operational": 2, 00:15:47.121 "base_bdevs_list": [ 00:15:47.121 { 00:15:47.121 "name": "spare", 00:15:47.121 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:47.121 "is_configured": true, 00:15:47.121 "data_offset": 256, 00:15:47.121 "data_size": 7936 00:15:47.121 }, 00:15:47.121 
{ 00:15:47.121 "name": "BaseBdev2", 00:15:47.121 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:47.121 "is_configured": true, 00:15:47.121 "data_offset": 256, 00:15:47.121 "data_size": 7936 00:15:47.121 } 00:15:47.121 ] 00:15:47.121 }' 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.121 23:33:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.380 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:47.380 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.380 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.380 [2024-09-30 23:33:27.195795] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.380 [2024-09-30 23:33:27.195877] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.380 [2024-09-30 23:33:27.195995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.380 [2024-09-30 23:33:27.196079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.380 [2024-09-30 23:33:27.196127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:47.380 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.380 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.380 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.380 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.380 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:47.380 
23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:47.639 /dev/nbd0 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:47.639 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:47.640 23:33:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:47.640 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:47.640 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:47.640 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.898 1+0 records in 00:15:47.898 1+0 records out 00:15:47.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368203 s, 11.1 MB/s 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:47.898 /dev/nbd1 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.898 1+0 records in 00:15:47.898 1+0 records out 00:15:47.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387331 s, 10.6 MB/s 00:15:47.898 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.157 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:48.157 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.157 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:15:48.157 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:48.157 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.157 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:48.158 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:48.158 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:48.158 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.158 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:48.158 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.158 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:48.158 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.158 23:33:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.416 [2024-09-30 23:33:28.256305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.416 [2024-09-30 23:33:28.256422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.416 [2024-09-30 23:33:28.256447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:48.416 [2024-09-30 23:33:28.256469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.416 [2024-09-30 23:33:28.258993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.416 [2024-09-30 23:33:28.259034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.416 [2024-09-30 23:33:28.259123] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:48.416 [2024-09-30 23:33:28.259176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.416 [2024-09-30 23:33:28.259307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.416 spare 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:48.416 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.417 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.675 [2024-09-30 23:33:28.359211] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:48.675 [2024-09-30 23:33:28.359292] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:48.675 [2024-09-30 23:33:28.359617] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:15:48.675 [2024-09-30 23:33:28.359814] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:48.675 [2024-09-30 23:33:28.359870] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:48.675 [2024-09-30 23:33:28.360024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.675 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.675 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.675 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.676 23:33:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.676 "name": "raid_bdev1", 00:15:48.676 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:48.676 "strip_size_kb": 0, 00:15:48.676 "state": "online", 00:15:48.676 "raid_level": "raid1", 00:15:48.676 "superblock": true, 00:15:48.676 "num_base_bdevs": 2, 00:15:48.676 "num_base_bdevs_discovered": 2, 00:15:48.676 "num_base_bdevs_operational": 2, 00:15:48.676 "base_bdevs_list": [ 00:15:48.676 { 00:15:48.676 "name": "spare", 00:15:48.676 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:48.676 "is_configured": true, 00:15:48.676 "data_offset": 256, 00:15:48.676 "data_size": 7936 00:15:48.676 }, 00:15:48.676 { 00:15:48.676 "name": "BaseBdev2", 00:15:48.676 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:48.676 "is_configured": true, 00:15:48.676 "data_offset": 256, 00:15:48.676 "data_size": 7936 00:15:48.676 } 00:15:48.676 ] 00:15:48.676 }' 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.676 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.245 23:33:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.245 "name": "raid_bdev1", 00:15:49.245 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:49.245 "strip_size_kb": 0, 00:15:49.245 "state": "online", 00:15:49.245 "raid_level": "raid1", 00:15:49.245 "superblock": true, 00:15:49.245 "num_base_bdevs": 2, 00:15:49.245 "num_base_bdevs_discovered": 2, 00:15:49.245 "num_base_bdevs_operational": 2, 00:15:49.245 "base_bdevs_list": [ 00:15:49.245 { 00:15:49.245 "name": "spare", 00:15:49.245 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:49.245 "is_configured": true, 00:15:49.245 "data_offset": 256, 00:15:49.245 "data_size": 7936 00:15:49.245 }, 00:15:49.245 { 00:15:49.245 "name": "BaseBdev2", 00:15:49.245 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:49.245 "is_configured": true, 00:15:49.245 "data_offset": 256, 00:15:49.245 "data_size": 7936 00:15:49.245 } 00:15:49.245 ] 00:15:49.245 }' 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.245 23:33:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.245 23:33:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.245 [2024-09-30 23:33:28.999203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.245 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.246 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.246 "name": "raid_bdev1", 00:15:49.246 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:49.246 "strip_size_kb": 0, 00:15:49.246 "state": "online", 00:15:49.246 "raid_level": "raid1", 00:15:49.246 "superblock": true, 00:15:49.246 "num_base_bdevs": 2, 00:15:49.246 "num_base_bdevs_discovered": 1, 00:15:49.246 "num_base_bdevs_operational": 1, 00:15:49.246 "base_bdevs_list": [ 00:15:49.246 { 00:15:49.246 "name": null, 00:15:49.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.246 "is_configured": false, 00:15:49.246 "data_offset": 0, 00:15:49.246 "data_size": 7936 00:15:49.246 }, 00:15:49.246 { 00:15:49.246 "name": "BaseBdev2", 00:15:49.246 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:49.246 "is_configured": true, 00:15:49.246 "data_offset": 256, 00:15:49.246 "data_size": 7936 00:15:49.246 } 00:15:49.246 ] 00:15:49.246 }' 
00:15:49.246 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.246 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.818 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.818 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.818 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.818 [2024-09-30 23:33:29.430539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.818 [2024-09-30 23:33:29.430737] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:49.818 [2024-09-30 23:33:29.430794] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:49.818 [2024-09-30 23:33:29.430865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.818 [2024-09-30 23:33:29.438060] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:15:49.818 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.818 23:33:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:49.818 [2024-09-30 23:33:29.440211] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.805 "name": "raid_bdev1", 00:15:50.805 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:50.805 "strip_size_kb": 0, 00:15:50.805 "state": "online", 00:15:50.805 "raid_level": "raid1", 00:15:50.805 "superblock": true, 00:15:50.805 "num_base_bdevs": 2, 00:15:50.805 "num_base_bdevs_discovered": 2, 00:15:50.805 "num_base_bdevs_operational": 2, 00:15:50.805 "process": { 00:15:50.805 "type": "rebuild", 00:15:50.805 "target": "spare", 00:15:50.805 "progress": { 00:15:50.805 "blocks": 2560, 00:15:50.805 "percent": 32 00:15:50.805 } 00:15:50.805 }, 00:15:50.805 "base_bdevs_list": [ 00:15:50.805 { 00:15:50.805 "name": "spare", 00:15:50.805 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:50.805 "is_configured": true, 00:15:50.805 "data_offset": 256, 00:15:50.805 "data_size": 7936 00:15:50.805 }, 00:15:50.805 { 00:15:50.805 "name": "BaseBdev2", 00:15:50.805 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:50.805 "is_configured": true, 00:15:50.805 "data_offset": 256, 00:15:50.805 "data_size": 7936 00:15:50.805 } 00:15:50.805 ] 00:15:50.805 }' 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.805 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.805 [2024-09-30 23:33:30.604025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.805 [2024-09-30 23:33:30.647617] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:50.805 [2024-09-30 23:33:30.647720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.805 [2024-09-30 23:33:30.647757] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.805 [2024-09-30 23:33:30.647780] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.064 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.065 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.065 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.065 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.065 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.065 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.065 "name": "raid_bdev1", 00:15:51.065 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:51.065 "strip_size_kb": 0, 00:15:51.065 "state": "online", 00:15:51.065 "raid_level": "raid1", 00:15:51.065 "superblock": true, 00:15:51.065 "num_base_bdevs": 2, 00:15:51.065 "num_base_bdevs_discovered": 1, 00:15:51.065 "num_base_bdevs_operational": 1, 00:15:51.065 "base_bdevs_list": [ 00:15:51.065 { 00:15:51.065 "name": null, 00:15:51.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.065 "is_configured": false, 00:15:51.065 "data_offset": 0, 00:15:51.065 "data_size": 7936 00:15:51.065 }, 00:15:51.065 { 00:15:51.065 "name": "BaseBdev2", 00:15:51.065 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:51.065 "is_configured": true, 00:15:51.065 
"data_offset": 256, 00:15:51.065 "data_size": 7936 00:15:51.065 } 00:15:51.065 ] 00:15:51.065 }' 00:15:51.065 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.065 23:33:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.324 23:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:51.324 23:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.324 23:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.324 [2024-09-30 23:33:31.121496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:51.324 [2024-09-30 23:33:31.121598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.324 [2024-09-30 23:33:31.121636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:51.324 [2024-09-30 23:33:31.121663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.324 [2024-09-30 23:33:31.122164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.324 [2024-09-30 23:33:31.122221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:51.324 [2024-09-30 23:33:31.122333] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:51.324 [2024-09-30 23:33:31.122368] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.324 [2024-09-30 23:33:31.122414] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:51.324 [2024-09-30 23:33:31.122457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.324 spare 00:15:51.324 [2024-09-30 23:33:31.127710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:15:51.324 23:33:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.324 23:33:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:51.324 [2024-09-30 23:33:31.129773] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.703 "name": "raid_bdev1", 00:15:52.703 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:52.703 "strip_size_kb": 0, 00:15:52.703 
"state": "online", 00:15:52.703 "raid_level": "raid1", 00:15:52.703 "superblock": true, 00:15:52.703 "num_base_bdevs": 2, 00:15:52.703 "num_base_bdevs_discovered": 2, 00:15:52.703 "num_base_bdevs_operational": 2, 00:15:52.703 "process": { 00:15:52.703 "type": "rebuild", 00:15:52.703 "target": "spare", 00:15:52.703 "progress": { 00:15:52.703 "blocks": 2560, 00:15:52.703 "percent": 32 00:15:52.703 } 00:15:52.703 }, 00:15:52.703 "base_bdevs_list": [ 00:15:52.703 { 00:15:52.703 "name": "spare", 00:15:52.703 "uuid": "6bba25f7-e51e-56f5-b5da-c9e2f05260d0", 00:15:52.703 "is_configured": true, 00:15:52.703 "data_offset": 256, 00:15:52.703 "data_size": 7936 00:15:52.703 }, 00:15:52.703 { 00:15:52.703 "name": "BaseBdev2", 00:15:52.703 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:52.703 "is_configured": true, 00:15:52.703 "data_offset": 256, 00:15:52.703 "data_size": 7936 00:15:52.703 } 00:15:52.703 ] 00:15:52.703 }' 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.703 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 [2024-09-30 23:33:32.293675] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.704 [2024-09-30 23:33:32.337283] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:52.704 [2024-09-30 23:33:32.337393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.704 [2024-09-30 23:33:32.337428] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.704 [2024-09-30 23:33:32.337451] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.704 23:33:32 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.704 "name": "raid_bdev1", 00:15:52.704 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:52.704 "strip_size_kb": 0, 00:15:52.704 "state": "online", 00:15:52.704 "raid_level": "raid1", 00:15:52.704 "superblock": true, 00:15:52.704 "num_base_bdevs": 2, 00:15:52.704 "num_base_bdevs_discovered": 1, 00:15:52.704 "num_base_bdevs_operational": 1, 00:15:52.704 "base_bdevs_list": [ 00:15:52.704 { 00:15:52.704 "name": null, 00:15:52.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.704 "is_configured": false, 00:15:52.704 "data_offset": 0, 00:15:52.704 "data_size": 7936 00:15:52.704 }, 00:15:52.704 { 00:15:52.704 "name": "BaseBdev2", 00:15:52.704 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:52.704 "is_configured": true, 00:15:52.704 "data_offset": 256, 00:15:52.704 "data_size": 7936 00:15:52.704 } 00:15:52.704 ] 00:15:52.704 }' 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.704 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.964 "name": "raid_bdev1", 00:15:52.964 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:52.964 "strip_size_kb": 0, 00:15:52.964 "state": "online", 00:15:52.964 "raid_level": "raid1", 00:15:52.964 "superblock": true, 00:15:52.964 "num_base_bdevs": 2, 00:15:52.964 "num_base_bdevs_discovered": 1, 00:15:52.964 "num_base_bdevs_operational": 1, 00:15:52.964 "base_bdevs_list": [ 00:15:52.964 { 00:15:52.964 "name": null, 00:15:52.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.964 "is_configured": false, 00:15:52.964 "data_offset": 0, 00:15:52.964 "data_size": 7936 00:15:52.964 }, 00:15:52.964 { 00:15:52.964 "name": "BaseBdev2", 00:15:52.964 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:52.964 "is_configured": true, 00:15:52.964 "data_offset": 256, 00:15:52.964 "data_size": 7936 00:15:52.964 } 00:15:52.964 ] 00:15:52.964 }' 00:15:52.964 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:53.223 [2024-09-30 23:33:32.883393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:53.223 [2024-09-30 23:33:32.883488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.223 [2024-09-30 23:33:32.883534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:53.223 [2024-09-30 23:33:32.883564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.223 [2024-09-30 23:33:32.884042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.223 [2024-09-30 23:33:32.884111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:53.223 [2024-09-30 23:33:32.884210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:53.223 [2024-09-30 23:33:32.884258] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:53.223 [2024-09-30 23:33:32.884294] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:53.223 [2024-09-30 23:33:32.884338] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:53.223 BaseBdev1 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.223 23:33:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.159 "name": "raid_bdev1", 00:15:54.159 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:54.159 "strip_size_kb": 0, 00:15:54.159 "state": "online", 00:15:54.159 "raid_level": "raid1", 00:15:54.159 "superblock": true, 00:15:54.159 "num_base_bdevs": 2, 00:15:54.159 "num_base_bdevs_discovered": 1, 00:15:54.159 "num_base_bdevs_operational": 1, 00:15:54.159 "base_bdevs_list": [ 00:15:54.159 { 00:15:54.159 "name": null, 00:15:54.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.159 "is_configured": false, 00:15:54.159 "data_offset": 0, 00:15:54.159 "data_size": 7936 00:15:54.159 }, 00:15:54.159 { 00:15:54.159 "name": "BaseBdev2", 00:15:54.159 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:54.159 "is_configured": true, 00:15:54.159 "data_offset": 256, 00:15:54.159 "data_size": 7936 00:15:54.159 } 00:15:54.159 ] 00:15:54.159 }' 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.159 23:33:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.727 "name": "raid_bdev1", 00:15:54.727 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:54.727 "strip_size_kb": 0, 00:15:54.727 "state": "online", 00:15:54.727 "raid_level": "raid1", 00:15:54.727 "superblock": true, 00:15:54.727 "num_base_bdevs": 2, 00:15:54.727 "num_base_bdevs_discovered": 1, 00:15:54.727 "num_base_bdevs_operational": 1, 00:15:54.727 "base_bdevs_list": [ 00:15:54.727 { 00:15:54.727 "name": null, 00:15:54.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.727 "is_configured": false, 00:15:54.727 "data_offset": 0, 00:15:54.727 "data_size": 7936 00:15:54.727 }, 00:15:54.727 { 00:15:54.727 "name": "BaseBdev2", 00:15:54.727 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:54.727 "is_configured": true, 00:15:54.727 "data_offset": 256, 00:15:54.727 "data_size": 7936 00:15:54.727 } 00:15:54.727 ] 00:15:54.727 }' 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.727 [2024-09-30 23:33:34.452701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.727 [2024-09-30 23:33:34.452887] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.727 [2024-09-30 23:33:34.452941] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:54.727 request: 00:15:54.727 { 00:15:54.727 "base_bdev": "BaseBdev1", 00:15:54.727 "raid_bdev": "raid_bdev1", 00:15:54.727 "method": "bdev_raid_add_base_bdev", 00:15:54.727 "req_id": 1 00:15:54.727 } 00:15:54.727 Got JSON-RPC error response 00:15:54.727 response: 00:15:54.727 { 00:15:54.727 "code": -22, 00:15:54.727 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:54.727 } 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.727 23:33:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.663 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.922 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.922 "name": "raid_bdev1", 00:15:55.922 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:55.922 "strip_size_kb": 0, 00:15:55.922 "state": "online", 00:15:55.922 "raid_level": "raid1", 00:15:55.922 "superblock": true, 00:15:55.922 "num_base_bdevs": 2, 00:15:55.922 "num_base_bdevs_discovered": 1, 00:15:55.922 "num_base_bdevs_operational": 1, 00:15:55.922 "base_bdevs_list": [ 00:15:55.922 { 00:15:55.922 "name": null, 00:15:55.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.922 "is_configured": false, 00:15:55.922 "data_offset": 0, 00:15:55.922 "data_size": 7936 00:15:55.922 }, 00:15:55.922 { 00:15:55.922 "name": "BaseBdev2", 00:15:55.922 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:55.922 "is_configured": true, 00:15:55.922 "data_offset": 256, 00:15:55.922 "data_size": 7936 00:15:55.922 } 00:15:55.922 ] 00:15:55.922 }' 00:15:55.922 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.922 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.181 23:33:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.181 "name": "raid_bdev1", 00:15:56.181 "uuid": "ab42845d-961b-4add-9497-d403bf25a8dd", 00:15:56.181 "strip_size_kb": 0, 00:15:56.181 "state": "online", 00:15:56.181 "raid_level": "raid1", 00:15:56.181 "superblock": true, 00:15:56.181 "num_base_bdevs": 2, 00:15:56.181 "num_base_bdevs_discovered": 1, 00:15:56.181 "num_base_bdevs_operational": 1, 00:15:56.181 "base_bdevs_list": [ 00:15:56.181 { 00:15:56.181 "name": null, 00:15:56.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.181 "is_configured": false, 00:15:56.181 "data_offset": 0, 00:15:56.181 "data_size": 7936 00:15:56.181 }, 00:15:56.181 { 00:15:56.181 "name": "BaseBdev2", 00:15:56.181 "uuid": "a8557a25-eeba-5b94-a13d-a0d6473b73fe", 00:15:56.181 "is_configured": true, 00:15:56.181 "data_offset": 256, 00:15:56.181 "data_size": 7936 00:15:56.181 } 00:15:56.181 ] 00:15:56.181 }' 00:15:56.181 23:33:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.181 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.181 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.440 23:33:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96920 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96920 ']' 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96920 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96920 00:15:56.440 killing process with pid 96920 00:15:56.440 Received shutdown signal, test time was about 60.000000 seconds 00:15:56.440 00:15:56.440 Latency(us) 00:15:56.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.440 =================================================================================================================== 00:15:56.440 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96920' 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96920 00:15:56.440 [2024-09-30 23:33:36.110453] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.440 [2024-09-30 23:33:36.110580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.440 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96920 00:15:56.441 [2024-09-30 23:33:36.110632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:15:56.441 [2024-09-30 23:33:36.110642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:56.441 [2024-09-30 23:33:36.167631] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.700 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:56.700 00:15:56.700 real 0m18.511s 00:15:56.700 user 0m24.385s 00:15:56.700 sys 0m2.737s 00:15:56.700 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.700 23:33:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.700 ************************************ 00:15:56.700 END TEST raid_rebuild_test_sb_4k 00:15:56.700 ************************************ 00:15:56.960 23:33:36 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:56.960 23:33:36 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:56.960 23:33:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:56.960 23:33:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.960 23:33:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.960 ************************************ 00:15:56.960 START TEST raid_state_function_test_sb_md_separate 00:15:56.960 ************************************ 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # 
local superblock=true 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 
00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:56.960 Process raid pid: 97594 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97594 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97594' 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97594 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97594 ']' 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.960 23:33:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:56.960 [2024-09-30 23:33:36.706848] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:15:56.960 [2024-09-30 23:33:36.707522] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.220 [2024-09-30 23:33:36.869919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.220 [2024-09-30 23:33:36.939085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.220 [2024-09-30 23:33:37.016021] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.220 [2024-09-30 23:33:37.016144] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.789 [2024-09-30 23:33:37.520045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.789 [2024-09-30 23:33:37.520167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:15:57.789 [2024-09-30 23:33:37.520185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.789 [2024-09-30 23:33:37.520195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.789 "name": "Existed_Raid", 00:15:57.789 "uuid": "4e4dd842-50c3-45d9-9f5d-2dc04a8759ee", 00:15:57.789 "strip_size_kb": 0, 00:15:57.789 "state": "configuring", 00:15:57.789 "raid_level": "raid1", 00:15:57.789 "superblock": true, 00:15:57.789 "num_base_bdevs": 2, 00:15:57.789 "num_base_bdevs_discovered": 0, 00:15:57.789 "num_base_bdevs_operational": 2, 00:15:57.789 "base_bdevs_list": [ 00:15:57.789 { 00:15:57.789 "name": "BaseBdev1", 00:15:57.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.789 "is_configured": false, 00:15:57.789 "data_offset": 0, 00:15:57.789 "data_size": 0 00:15:57.789 }, 00:15:57.789 { 00:15:57.789 "name": "BaseBdev2", 00:15:57.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.789 "is_configured": false, 00:15:57.789 "data_offset": 0, 00:15:57.789 "data_size": 0 00:15:57.789 } 00:15:57.789 ] 00:15:57.789 }' 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.789 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.357 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.357 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.357 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.357 
[2024-09-30 23:33:37.991164] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.357 [2024-09-30 23:33:37.991253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:58.357 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.357 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:58.357 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.357 23:33:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.357 [2024-09-30 23:33:38.003180] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.357 [2024-09-30 23:33:38.003251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.357 [2024-09-30 23:33:38.003276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.357 [2024-09-30 23:33:38.003299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.357 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.357 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:58.357 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.357 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.357 [2024-09-30 23:33:38.031191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.357 
BaseBdev1 00:15:58.357 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.357 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:58.357 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:58.357 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.358 [ 00:15:58.358 { 00:15:58.358 "name": "BaseBdev1", 00:15:58.358 "aliases": [ 00:15:58.358 "865e11a9-5ba4-4392-98cd-1cc9b58fb3f2" 00:15:58.358 ], 00:15:58.358 "product_name": "Malloc disk", 
00:15:58.358 "block_size": 4096, 00:15:58.358 "num_blocks": 8192, 00:15:58.358 "uuid": "865e11a9-5ba4-4392-98cd-1cc9b58fb3f2", 00:15:58.358 "md_size": 32, 00:15:58.358 "md_interleave": false, 00:15:58.358 "dif_type": 0, 00:15:58.358 "assigned_rate_limits": { 00:15:58.358 "rw_ios_per_sec": 0, 00:15:58.358 "rw_mbytes_per_sec": 0, 00:15:58.358 "r_mbytes_per_sec": 0, 00:15:58.358 "w_mbytes_per_sec": 0 00:15:58.358 }, 00:15:58.358 "claimed": true, 00:15:58.358 "claim_type": "exclusive_write", 00:15:58.358 "zoned": false, 00:15:58.358 "supported_io_types": { 00:15:58.358 "read": true, 00:15:58.358 "write": true, 00:15:58.358 "unmap": true, 00:15:58.358 "flush": true, 00:15:58.358 "reset": true, 00:15:58.358 "nvme_admin": false, 00:15:58.358 "nvme_io": false, 00:15:58.358 "nvme_io_md": false, 00:15:58.358 "write_zeroes": true, 00:15:58.358 "zcopy": true, 00:15:58.358 "get_zone_info": false, 00:15:58.358 "zone_management": false, 00:15:58.358 "zone_append": false, 00:15:58.358 "compare": false, 00:15:58.358 "compare_and_write": false, 00:15:58.358 "abort": true, 00:15:58.358 "seek_hole": false, 00:15:58.358 "seek_data": false, 00:15:58.358 "copy": true, 00:15:58.358 "nvme_iov_md": false 00:15:58.358 }, 00:15:58.358 "memory_domains": [ 00:15:58.358 { 00:15:58.358 "dma_device_id": "system", 00:15:58.358 "dma_device_type": 1 00:15:58.358 }, 00:15:58.358 { 00:15:58.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.358 "dma_device_type": 2 00:15:58.358 } 00:15:58.358 ], 00:15:58.358 "driver_specific": {} 00:15:58.358 } 00:15:58.358 ] 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.358 23:33:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.358 "name": "Existed_Raid", 00:15:58.358 "uuid": "b2540152-4e53-4112-bf47-0d80785912fe", 
00:15:58.358 "strip_size_kb": 0, 00:15:58.358 "state": "configuring", 00:15:58.358 "raid_level": "raid1", 00:15:58.358 "superblock": true, 00:15:58.358 "num_base_bdevs": 2, 00:15:58.358 "num_base_bdevs_discovered": 1, 00:15:58.358 "num_base_bdevs_operational": 2, 00:15:58.358 "base_bdevs_list": [ 00:15:58.358 { 00:15:58.358 "name": "BaseBdev1", 00:15:58.358 "uuid": "865e11a9-5ba4-4392-98cd-1cc9b58fb3f2", 00:15:58.358 "is_configured": true, 00:15:58.358 "data_offset": 256, 00:15:58.358 "data_size": 7936 00:15:58.358 }, 00:15:58.358 { 00:15:58.358 "name": "BaseBdev2", 00:15:58.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.358 "is_configured": false, 00:15:58.358 "data_offset": 0, 00:15:58.358 "data_size": 0 00:15:58.358 } 00:15:58.358 ] 00:15:58.358 }' 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.358 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.927 [2024-09-30 23:33:38.514388] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.927 [2024-09-30 23:33:38.514469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:58.927 23:33:38 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.927 [2024-09-30 23:33:38.526436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.927 [2024-09-30 23:33:38.528569] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.927 [2024-09-30 23:33:38.528642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.927 "name": "Existed_Raid", 00:15:58.927 "uuid": "b3ff8c19-247c-49c7-b83f-d55fa8a1f87e", 00:15:58.927 "strip_size_kb": 0, 00:15:58.927 "state": "configuring", 00:15:58.927 "raid_level": "raid1", 00:15:58.927 "superblock": true, 00:15:58.927 "num_base_bdevs": 2, 00:15:58.927 "num_base_bdevs_discovered": 1, 00:15:58.927 "num_base_bdevs_operational": 2, 00:15:58.927 "base_bdevs_list": [ 00:15:58.927 { 00:15:58.927 "name": "BaseBdev1", 00:15:58.927 "uuid": "865e11a9-5ba4-4392-98cd-1cc9b58fb3f2", 00:15:58.927 "is_configured": true, 00:15:58.927 "data_offset": 256, 00:15:58.927 "data_size": 7936 00:15:58.927 }, 00:15:58.927 { 00:15:58.927 "name": "BaseBdev2", 00:15:58.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.927 "is_configured": false, 00:15:58.927 "data_offset": 0, 00:15:58.927 "data_size": 0 00:15:58.927 } 00:15:58.927 ] 00:15:58.927 }' 00:15:58.927 23:33:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.927 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.187 [2024-09-30 23:33:38.975847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.187 [2024-09-30 23:33:38.976358] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:59.187 [2024-09-30 23:33:38.976462] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:59.187 [2024-09-30 23:33:38.976738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:59.187 BaseBdev2 00:15:59.187 [2024-09-30 23:33:38.977046] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:59.187 [2024-09-30 23:33:38.977083] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:59.187 [2024-09-30 23:33:38.977305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.187 23:33:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.187 [ 00:15:59.187 { 00:15:59.187 "name": "BaseBdev2", 00:15:59.187 "aliases": [ 00:15:59.187 "08e40333-96c5-434e-8d99-423a7ec9a2ed" 00:15:59.187 ], 00:15:59.187 "product_name": "Malloc disk", 00:15:59.187 "block_size": 4096, 00:15:59.187 "num_blocks": 8192, 00:15:59.187 "uuid": "08e40333-96c5-434e-8d99-423a7ec9a2ed", 00:15:59.187 "md_size": 32, 00:15:59.188 "md_interleave": false, 00:15:59.188 "dif_type": 0, 00:15:59.188 "assigned_rate_limits": { 00:15:59.188 "rw_ios_per_sec": 0, 00:15:59.188 "rw_mbytes_per_sec": 0, 00:15:59.188 "r_mbytes_per_sec": 0, 00:15:59.188 "w_mbytes_per_sec": 0 00:15:59.188 }, 00:15:59.188 "claimed": true, 00:15:59.188 "claim_type": 
"exclusive_write", 00:15:59.188 "zoned": false, 00:15:59.188 "supported_io_types": { 00:15:59.188 "read": true, 00:15:59.188 "write": true, 00:15:59.188 "unmap": true, 00:15:59.188 "flush": true, 00:15:59.188 "reset": true, 00:15:59.188 "nvme_admin": false, 00:15:59.188 "nvme_io": false, 00:15:59.188 "nvme_io_md": false, 00:15:59.188 "write_zeroes": true, 00:15:59.188 "zcopy": true, 00:15:59.188 "get_zone_info": false, 00:15:59.188 "zone_management": false, 00:15:59.188 "zone_append": false, 00:15:59.188 "compare": false, 00:15:59.188 "compare_and_write": false, 00:15:59.188 "abort": true, 00:15:59.188 "seek_hole": false, 00:15:59.188 "seek_data": false, 00:15:59.188 "copy": true, 00:15:59.188 "nvme_iov_md": false 00:15:59.188 }, 00:15:59.188 "memory_domains": [ 00:15:59.188 { 00:15:59.188 "dma_device_id": "system", 00:15:59.188 "dma_device_type": 1 00:15:59.188 }, 00:15:59.188 { 00:15:59.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.188 "dma_device_type": 2 00:15:59.188 } 00:15:59.188 ], 00:15:59.188 "driver_specific": {} 00:15:59.188 } 00:15:59.188 ] 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.188 
23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.188 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.447 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.447 "name": "Existed_Raid", 00:15:59.447 "uuid": "b3ff8c19-247c-49c7-b83f-d55fa8a1f87e", 00:15:59.447 "strip_size_kb": 0, 00:15:59.447 "state": "online", 00:15:59.447 "raid_level": "raid1", 00:15:59.447 "superblock": true, 00:15:59.447 "num_base_bdevs": 2, 00:15:59.447 "num_base_bdevs_discovered": 2, 00:15:59.447 "num_base_bdevs_operational": 2, 00:15:59.447 
"base_bdevs_list": [ 00:15:59.447 { 00:15:59.447 "name": "BaseBdev1", 00:15:59.447 "uuid": "865e11a9-5ba4-4392-98cd-1cc9b58fb3f2", 00:15:59.447 "is_configured": true, 00:15:59.447 "data_offset": 256, 00:15:59.447 "data_size": 7936 00:15:59.447 }, 00:15:59.447 { 00:15:59.447 "name": "BaseBdev2", 00:15:59.447 "uuid": "08e40333-96c5-434e-8d99-423a7ec9a2ed", 00:15:59.447 "is_configured": true, 00:15:59.447 "data_offset": 256, 00:15:59.447 "data_size": 7936 00:15:59.447 } 00:15:59.447 ] 00:15:59.447 }' 00:15:59.447 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.447 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:15:59.705 [2024-09-30 23:33:39.427495] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.705 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.705 "name": "Existed_Raid", 00:15:59.705 "aliases": [ 00:15:59.705 "b3ff8c19-247c-49c7-b83f-d55fa8a1f87e" 00:15:59.705 ], 00:15:59.705 "product_name": "Raid Volume", 00:15:59.705 "block_size": 4096, 00:15:59.705 "num_blocks": 7936, 00:15:59.705 "uuid": "b3ff8c19-247c-49c7-b83f-d55fa8a1f87e", 00:15:59.705 "md_size": 32, 00:15:59.705 "md_interleave": false, 00:15:59.705 "dif_type": 0, 00:15:59.705 "assigned_rate_limits": { 00:15:59.705 "rw_ios_per_sec": 0, 00:15:59.705 "rw_mbytes_per_sec": 0, 00:15:59.705 "r_mbytes_per_sec": 0, 00:15:59.705 "w_mbytes_per_sec": 0 00:15:59.705 }, 00:15:59.705 "claimed": false, 00:15:59.706 "zoned": false, 00:15:59.706 "supported_io_types": { 00:15:59.706 "read": true, 00:15:59.706 "write": true, 00:15:59.706 "unmap": false, 00:15:59.706 "flush": false, 00:15:59.706 "reset": true, 00:15:59.706 "nvme_admin": false, 00:15:59.706 "nvme_io": false, 00:15:59.706 "nvme_io_md": false, 00:15:59.706 "write_zeroes": true, 00:15:59.706 "zcopy": false, 00:15:59.706 "get_zone_info": false, 00:15:59.706 "zone_management": false, 00:15:59.706 "zone_append": false, 00:15:59.706 "compare": false, 00:15:59.706 "compare_and_write": false, 00:15:59.706 "abort": false, 00:15:59.706 "seek_hole": false, 00:15:59.706 "seek_data": false, 00:15:59.706 "copy": false, 00:15:59.706 "nvme_iov_md": false 00:15:59.706 }, 00:15:59.706 "memory_domains": [ 00:15:59.706 { 00:15:59.706 "dma_device_id": "system", 00:15:59.706 "dma_device_type": 1 00:15:59.706 }, 00:15:59.706 { 00:15:59.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.706 "dma_device_type": 2 00:15:59.706 }, 00:15:59.706 { 
00:15:59.706 "dma_device_id": "system", 00:15:59.706 "dma_device_type": 1 00:15:59.706 }, 00:15:59.706 { 00:15:59.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.706 "dma_device_type": 2 00:15:59.706 } 00:15:59.706 ], 00:15:59.706 "driver_specific": { 00:15:59.706 "raid": { 00:15:59.706 "uuid": "b3ff8c19-247c-49c7-b83f-d55fa8a1f87e", 00:15:59.706 "strip_size_kb": 0, 00:15:59.706 "state": "online", 00:15:59.706 "raid_level": "raid1", 00:15:59.706 "superblock": true, 00:15:59.706 "num_base_bdevs": 2, 00:15:59.706 "num_base_bdevs_discovered": 2, 00:15:59.706 "num_base_bdevs_operational": 2, 00:15:59.706 "base_bdevs_list": [ 00:15:59.706 { 00:15:59.706 "name": "BaseBdev1", 00:15:59.706 "uuid": "865e11a9-5ba4-4392-98cd-1cc9b58fb3f2", 00:15:59.706 "is_configured": true, 00:15:59.706 "data_offset": 256, 00:15:59.706 "data_size": 7936 00:15:59.706 }, 00:15:59.706 { 00:15:59.706 "name": "BaseBdev2", 00:15:59.706 "uuid": "08e40333-96c5-434e-8d99-423a7ec9a2ed", 00:15:59.706 "is_configured": true, 00:15:59.706 "data_offset": 256, 00:15:59.706 "data_size": 7936 00:15:59.706 } 00:15:59.706 ] 00:15:59.706 } 00:15:59.706 } 00:15:59.706 }' 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:59.706 BaseBdev2' 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.706 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.966 [2024-09-30 23:33:39.639006] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.966 "name": "Existed_Raid", 00:15:59.966 "uuid": "b3ff8c19-247c-49c7-b83f-d55fa8a1f87e", 00:15:59.966 "strip_size_kb": 0, 00:15:59.966 "state": "online", 00:15:59.966 "raid_level": "raid1", 00:15:59.966 "superblock": true, 00:15:59.966 "num_base_bdevs": 2, 00:15:59.966 "num_base_bdevs_discovered": 1, 00:15:59.966 "num_base_bdevs_operational": 1, 00:15:59.966 "base_bdevs_list": [ 00:15:59.966 { 00:15:59.966 "name": null, 00:15:59.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.966 "is_configured": false, 00:15:59.966 "data_offset": 0, 00:15:59.966 "data_size": 7936 00:15:59.966 }, 00:15:59.966 { 00:15:59.966 "name": "BaseBdev2", 00:15:59.966 "uuid": 
"08e40333-96c5-434e-8d99-423a7ec9a2ed", 00:15:59.966 "is_configured": true, 00:15:59.966 "data_offset": 256, 00:15:59.966 "data_size": 7936 00:15:59.966 } 00:15:59.966 ] 00:15:59.966 }' 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.966 23:33:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.536 [2024-09-30 23:33:40.148197] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.536 [2024-09-30 23:33:40.148303] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.536 [2024-09-30 23:33:40.170543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.536 [2024-09-30 23:33:40.170593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.536 [2024-09-30 23:33:40.170608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:00.536 23:33:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97594 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97594 ']' 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97594 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97594 00:16:00.536 killing process with pid 97594 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97594' 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97594 00:16:00.536 [2024-09-30 23:33:40.264399] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.536 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97594 00:16:00.536 [2024-09-30 23:33:40.265964] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.796 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:00.796 00:16:00.796 real 0m4.034s 00:16:00.796 user 0m6.109s 00:16:00.796 sys 0m0.922s 00:16:00.796 23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.796 
23:33:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:00.796 ************************************ 00:16:00.796 END TEST raid_state_function_test_sb_md_separate 00:16:00.796 ************************************ 00:16:01.056 23:33:40 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:01.056 23:33:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:01.056 23:33:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.056 23:33:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.056 ************************************ 00:16:01.056 START TEST raid_superblock_test_md_separate 00:16:01.056 ************************************ 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97835 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97835 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97835 ']' 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.056 23:33:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.056 [2024-09-30 23:33:40.810721] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:16:01.056 [2024-09-30 23:33:40.810906] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97835 ] 00:16:01.316 [2024-09-30 23:33:40.971836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.316 [2024-09-30 23:33:41.042652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.316 [2024-09-30 23:33:41.121250] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.316 [2024-09-30 23:33:41.121292] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:01.886 23:33:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 malloc1 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [2024-09-30 23:33:41.650242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.886 [2024-09-30 23:33:41.650309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.886 [2024-09-30 23:33:41.650331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:01.886 [2024-09-30 23:33:41.650351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.886 [2024-09-30 23:33:41.652573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.886 [2024-09-30 23:33:41.652612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:01.886 pt1 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 malloc2 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 23:33:41 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [2024-09-30 23:33:41.704072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.886 [2024-09-30 23:33:41.704189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.886 [2024-09-30 23:33:41.704227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:01.886 [2024-09-30 23:33:41.704253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.886 [2024-09-30 23:33:41.708001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.886 [2024-09-30 23:33:41.708057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.886 pt2 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.886 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:01.886 [2024-09-30 23:33:41.716282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.886 [2024-09-30 23:33:41.718847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.886 [2024-09-30 23:33:41.719047] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:01.886 [2024-09-30 23:33:41.719068] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:01.887 [2024-09-30 23:33:41.719184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:01.887 [2024-09-30 23:33:41.719330] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:01.887 [2024-09-30 23:33:41.719361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:01.887 [2024-09-30 23:33:41.719484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.887 23:33:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.887 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.146 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.146 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.146 "name": "raid_bdev1", 00:16:02.146 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:02.146 "strip_size_kb": 0, 00:16:02.146 "state": "online", 00:16:02.146 "raid_level": "raid1", 00:16:02.146 "superblock": true, 00:16:02.146 "num_base_bdevs": 2, 00:16:02.146 "num_base_bdevs_discovered": 2, 00:16:02.146 "num_base_bdevs_operational": 2, 00:16:02.146 "base_bdevs_list": [ 00:16:02.146 { 00:16:02.146 "name": "pt1", 00:16:02.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.146 "is_configured": true, 00:16:02.146 "data_offset": 256, 00:16:02.146 "data_size": 7936 00:16:02.146 }, 00:16:02.146 { 00:16:02.146 "name": "pt2", 00:16:02.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.146 "is_configured": true, 00:16:02.146 "data_offset": 256, 00:16:02.146 "data_size": 7936 00:16:02.146 } 00:16:02.146 ] 00:16:02.146 }' 00:16:02.146 23:33:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.146 23:33:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.406 [2024-09-30 23:33:42.151943] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:02.406 "name": "raid_bdev1", 00:16:02.406 "aliases": [ 00:16:02.406 "45d28894-8b63-4342-bfd3-aa95ce9ded49" 00:16:02.406 ], 00:16:02.406 "product_name": "Raid Volume", 00:16:02.406 "block_size": 4096, 00:16:02.406 "num_blocks": 7936, 00:16:02.406 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:02.406 "md_size": 32, 00:16:02.406 "md_interleave": false, 00:16:02.406 "dif_type": 0, 00:16:02.406 "assigned_rate_limits": { 00:16:02.406 "rw_ios_per_sec": 0, 00:16:02.406 "rw_mbytes_per_sec": 0, 00:16:02.406 "r_mbytes_per_sec": 0, 00:16:02.406 "w_mbytes_per_sec": 0 00:16:02.406 }, 00:16:02.406 "claimed": false, 00:16:02.406 "zoned": false, 
00:16:02.406 "supported_io_types": { 00:16:02.406 "read": true, 00:16:02.406 "write": true, 00:16:02.406 "unmap": false, 00:16:02.406 "flush": false, 00:16:02.406 "reset": true, 00:16:02.406 "nvme_admin": false, 00:16:02.406 "nvme_io": false, 00:16:02.406 "nvme_io_md": false, 00:16:02.406 "write_zeroes": true, 00:16:02.406 "zcopy": false, 00:16:02.406 "get_zone_info": false, 00:16:02.406 "zone_management": false, 00:16:02.406 "zone_append": false, 00:16:02.406 "compare": false, 00:16:02.406 "compare_and_write": false, 00:16:02.406 "abort": false, 00:16:02.406 "seek_hole": false, 00:16:02.406 "seek_data": false, 00:16:02.406 "copy": false, 00:16:02.406 "nvme_iov_md": false 00:16:02.406 }, 00:16:02.406 "memory_domains": [ 00:16:02.406 { 00:16:02.406 "dma_device_id": "system", 00:16:02.406 "dma_device_type": 1 00:16:02.406 }, 00:16:02.406 { 00:16:02.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.406 "dma_device_type": 2 00:16:02.406 }, 00:16:02.406 { 00:16:02.406 "dma_device_id": "system", 00:16:02.406 "dma_device_type": 1 00:16:02.406 }, 00:16:02.406 { 00:16:02.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.406 "dma_device_type": 2 00:16:02.406 } 00:16:02.406 ], 00:16:02.406 "driver_specific": { 00:16:02.406 "raid": { 00:16:02.406 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:02.406 "strip_size_kb": 0, 00:16:02.406 "state": "online", 00:16:02.406 "raid_level": "raid1", 00:16:02.406 "superblock": true, 00:16:02.406 "num_base_bdevs": 2, 00:16:02.406 "num_base_bdevs_discovered": 2, 00:16:02.406 "num_base_bdevs_operational": 2, 00:16:02.406 "base_bdevs_list": [ 00:16:02.406 { 00:16:02.406 "name": "pt1", 00:16:02.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.406 "is_configured": true, 00:16:02.406 "data_offset": 256, 00:16:02.406 "data_size": 7936 00:16:02.406 }, 00:16:02.406 { 00:16:02.406 "name": "pt2", 00:16:02.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.406 "is_configured": true, 00:16:02.406 "data_offset": 256, 
00:16:02.406 "data_size": 7936 00:16:02.406 } 00:16:02.406 ] 00:16:02.406 } 00:16:02.406 } 00:16:02.406 }' 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:02.406 pt2' 00:16:02.406 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:02.666 [2024-09-30 23:33:42.363375] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=45d28894-8b63-4342-bfd3-aa95ce9ded49 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 45d28894-8b63-4342-bfd3-aa95ce9ded49 ']' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.666 23:33:42 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.666 [2024-09-30 23:33:42.411089] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.666 [2024-09-30 23:33:42.411114] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.666 [2024-09-30 23:33:42.411177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.666 [2024-09-30 23:33:42.411234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.666 [2024-09-30 23:33:42.411244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.666 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.667 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.667 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:02.667 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.667 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:02.667 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.667 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.927 [2024-09-30 23:33:42.534937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:02.927 [2024-09-30 23:33:42.537054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:02.927 [2024-09-30 23:33:42.537113] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:02.927 [2024-09-30 23:33:42.537160] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:02.927 [2024-09-30 23:33:42.537174] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.927 [2024-09-30 23:33:42.537187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:16:02.927 request: 00:16:02.927 { 00:16:02.927 "name": "raid_bdev1", 00:16:02.927 "raid_level": "raid1", 00:16:02.927 "base_bdevs": [ 00:16:02.927 "malloc1", 00:16:02.927 "malloc2" 00:16:02.927 ], 00:16:02.927 "superblock": false, 00:16:02.927 "method": "bdev_raid_create", 00:16:02.927 "req_id": 1 00:16:02.927 } 00:16:02.927 Got JSON-RPC error response 00:16:02.927 response: 00:16:02.927 { 00:16:02.927 "code": -17, 00:16:02.927 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:02.927 } 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.927 [2024-09-30 23:33:42.602740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.927 [2024-09-30 23:33:42.602968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.927 [2024-09-30 23:33:42.603055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.927 [2024-09-30 23:33:42.603117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.927 [2024-09-30 23:33:42.605591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.927 [2024-09-30 23:33:42.605774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.927 [2024-09-30 23:33:42.605822] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:02.927 [2024-09-30 23:33:42.605854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.927 pt1 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.927 "name": "raid_bdev1", 00:16:02.927 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:02.927 "strip_size_kb": 0, 00:16:02.927 "state": "configuring", 00:16:02.927 "raid_level": "raid1", 00:16:02.927 "superblock": true, 00:16:02.927 "num_base_bdevs": 2, 00:16:02.927 "num_base_bdevs_discovered": 1, 00:16:02.927 "num_base_bdevs_operational": 2, 00:16:02.927 "base_bdevs_list": [ 00:16:02.927 { 00:16:02.927 "name": "pt1", 00:16:02.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.927 "is_configured": true, 00:16:02.927 "data_offset": 256, 00:16:02.927 "data_size": 7936 00:16:02.927 }, 00:16:02.927 { 
00:16:02.927 "name": null, 00:16:02.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.927 "is_configured": false, 00:16:02.927 "data_offset": 256, 00:16:02.927 "data_size": 7936 00:16:02.927 } 00:16:02.927 ] 00:16:02.927 }' 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.927 23:33:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.187 [2024-09-30 23:33:43.018028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.187 [2024-09-30 23:33:43.018093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.187 [2024-09-30 23:33:43.018113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:03.187 [2024-09-30 23:33:43.018121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.187 [2024-09-30 23:33:43.018267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.187 [2024-09-30 23:33:43.018281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.187 [2024-09-30 23:33:43.018317] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.187 [2024-09-30 23:33:43.018339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.187 [2024-09-30 23:33:43.018415] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:03.187 [2024-09-30 23:33:43.018432] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:03.187 [2024-09-30 23:33:43.018498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:03.187 [2024-09-30 23:33:43.018577] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:03.187 [2024-09-30 23:33:43.018594] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:03.187 [2024-09-30 23:33:43.018650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.187 pt2 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.187 23:33:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.187 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.447 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.447 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.447 "name": "raid_bdev1", 00:16:03.447 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:03.447 "strip_size_kb": 0, 00:16:03.447 "state": "online", 00:16:03.447 "raid_level": "raid1", 00:16:03.447 "superblock": true, 00:16:03.447 "num_base_bdevs": 2, 00:16:03.447 "num_base_bdevs_discovered": 2, 00:16:03.447 "num_base_bdevs_operational": 2, 00:16:03.447 "base_bdevs_list": [ 00:16:03.447 { 00:16:03.447 "name": "pt1", 00:16:03.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.447 "is_configured": true, 00:16:03.447 "data_offset": 256, 00:16:03.447 "data_size": 7936 00:16:03.447 }, 00:16:03.447 { 00:16:03.447 "name": "pt2", 00:16:03.447 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:03.447 "is_configured": true, 00:16:03.447 "data_offset": 256, 00:16:03.447 "data_size": 7936 00:16:03.447 } 00:16:03.447 ] 00:16:03.447 }' 00:16:03.447 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.447 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.706 [2024-09-30 23:33:43.433542] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.706 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.706 "name": "raid_bdev1", 00:16:03.706 
"aliases": [ 00:16:03.706 "45d28894-8b63-4342-bfd3-aa95ce9ded49" 00:16:03.706 ], 00:16:03.706 "product_name": "Raid Volume", 00:16:03.706 "block_size": 4096, 00:16:03.706 "num_blocks": 7936, 00:16:03.706 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:03.706 "md_size": 32, 00:16:03.706 "md_interleave": false, 00:16:03.706 "dif_type": 0, 00:16:03.706 "assigned_rate_limits": { 00:16:03.706 "rw_ios_per_sec": 0, 00:16:03.706 "rw_mbytes_per_sec": 0, 00:16:03.706 "r_mbytes_per_sec": 0, 00:16:03.706 "w_mbytes_per_sec": 0 00:16:03.706 }, 00:16:03.706 "claimed": false, 00:16:03.706 "zoned": false, 00:16:03.706 "supported_io_types": { 00:16:03.706 "read": true, 00:16:03.706 "write": true, 00:16:03.706 "unmap": false, 00:16:03.706 "flush": false, 00:16:03.706 "reset": true, 00:16:03.706 "nvme_admin": false, 00:16:03.706 "nvme_io": false, 00:16:03.706 "nvme_io_md": false, 00:16:03.706 "write_zeroes": true, 00:16:03.706 "zcopy": false, 00:16:03.706 "get_zone_info": false, 00:16:03.706 "zone_management": false, 00:16:03.706 "zone_append": false, 00:16:03.706 "compare": false, 00:16:03.706 "compare_and_write": false, 00:16:03.706 "abort": false, 00:16:03.706 "seek_hole": false, 00:16:03.706 "seek_data": false, 00:16:03.706 "copy": false, 00:16:03.706 "nvme_iov_md": false 00:16:03.706 }, 00:16:03.706 "memory_domains": [ 00:16:03.706 { 00:16:03.706 "dma_device_id": "system", 00:16:03.706 "dma_device_type": 1 00:16:03.706 }, 00:16:03.706 { 00:16:03.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.706 "dma_device_type": 2 00:16:03.706 }, 00:16:03.706 { 00:16:03.706 "dma_device_id": "system", 00:16:03.706 "dma_device_type": 1 00:16:03.706 }, 00:16:03.706 { 00:16:03.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.706 "dma_device_type": 2 00:16:03.706 } 00:16:03.706 ], 00:16:03.706 "driver_specific": { 00:16:03.706 "raid": { 00:16:03.706 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:03.706 "strip_size_kb": 0, 00:16:03.706 "state": "online", 00:16:03.706 
"raid_level": "raid1", 00:16:03.706 "superblock": true, 00:16:03.706 "num_base_bdevs": 2, 00:16:03.706 "num_base_bdevs_discovered": 2, 00:16:03.706 "num_base_bdevs_operational": 2, 00:16:03.706 "base_bdevs_list": [ 00:16:03.706 { 00:16:03.706 "name": "pt1", 00:16:03.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.706 "is_configured": true, 00:16:03.706 "data_offset": 256, 00:16:03.706 "data_size": 7936 00:16:03.706 }, 00:16:03.706 { 00:16:03.706 "name": "pt2", 00:16:03.706 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.706 "is_configured": true, 00:16:03.706 "data_offset": 256, 00:16:03.706 "data_size": 7936 00:16:03.706 } 00:16:03.707 ] 00:16:03.707 } 00:16:03.707 } 00:16:03.707 }' 00:16:03.707 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.707 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:03.707 pt2' 00:16:03.707 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.707 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:03.707 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.707 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.966 23:33:43 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:03.966 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.967 [2024-09-30 23:33:43.665160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 45d28894-8b63-4342-bfd3-aa95ce9ded49 '!=' 45d28894-8b63-4342-bfd3-aa95ce9ded49 ']' 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.967 [2024-09-30 23:33:43.696897] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.967 
23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.967 "name": "raid_bdev1", 00:16:03.967 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:03.967 "strip_size_kb": 0, 00:16:03.967 "state": "online", 00:16:03.967 "raid_level": "raid1", 00:16:03.967 "superblock": true, 00:16:03.967 "num_base_bdevs": 2, 00:16:03.967 "num_base_bdevs_discovered": 1, 00:16:03.967 "num_base_bdevs_operational": 1, 00:16:03.967 "base_bdevs_list": [ 00:16:03.967 { 00:16:03.967 "name": null, 00:16:03.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.967 "is_configured": false, 00:16:03.967 "data_offset": 0, 00:16:03.967 "data_size": 7936 00:16:03.967 }, 00:16:03.967 { 00:16:03.967 "name": "pt2", 00:16:03.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.967 "is_configured": true, 00:16:03.967 "data_offset": 256, 00:16:03.967 "data_size": 7936 00:16:03.967 } 
00:16:03.967 ] 00:16:03.967 }' 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.967 23:33:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 [2024-09-30 23:33:44.156047] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.536 [2024-09-30 23:33:44.156076] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.536 [2024-09-30 23:33:44.156122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.536 [2024-09-30 23:33:44.156160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.536 [2024-09-30 23:33:44.156168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 23:33:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 [2024-09-30 23:33:44.219961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.536 [2024-09-30 
23:33:44.220004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.536 [2024-09-30 23:33:44.220021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:04.536 [2024-09-30 23:33:44.220029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.536 [2024-09-30 23:33:44.222196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.536 [2024-09-30 23:33:44.222230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.536 [2024-09-30 23:33:44.222272] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.536 [2024-09-30 23:33:44.222298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.536 [2024-09-30 23:33:44.222351] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:04.536 [2024-09-30 23:33:44.222358] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:04.536 [2024-09-30 23:33:44.222427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:04.536 [2024-09-30 23:33:44.222500] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:04.536 [2024-09-30 23:33:44.222514] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:04.536 [2024-09-30 23:33:44.222569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.536 pt2 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.536 "name": "raid_bdev1", 00:16:04.537 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:04.537 "strip_size_kb": 0, 00:16:04.537 "state": "online", 00:16:04.537 "raid_level": "raid1", 00:16:04.537 "superblock": true, 00:16:04.537 "num_base_bdevs": 2, 00:16:04.537 
"num_base_bdevs_discovered": 1, 00:16:04.537 "num_base_bdevs_operational": 1, 00:16:04.537 "base_bdevs_list": [ 00:16:04.537 { 00:16:04.537 "name": null, 00:16:04.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.537 "is_configured": false, 00:16:04.537 "data_offset": 256, 00:16:04.537 "data_size": 7936 00:16:04.537 }, 00:16:04.537 { 00:16:04.537 "name": "pt2", 00:16:04.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.537 "is_configured": true, 00:16:04.537 "data_offset": 256, 00:16:04.537 "data_size": 7936 00:16:04.537 } 00:16:04.537 ] 00:16:04.537 }' 00:16:04.537 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.537 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.106 [2024-09-30 23:33:44.695161] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.106 [2024-09-30 23:33:44.695184] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.106 [2024-09-30 23:33:44.695229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.106 [2024-09-30 23:33:44.695266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.106 [2024-09-30 23:33:44.695280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.106 23:33:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.106 [2024-09-30 23:33:44.759039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:05.106 [2024-09-30 23:33:44.759083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.106 [2024-09-30 23:33:44.759098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:05.106 [2024-09-30 23:33:44.759111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.106 [2024-09-30 23:33:44.761242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.106 [2024-09-30 23:33:44.761279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:16:05.106 [2024-09-30 23:33:44.761314] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:05.106 [2024-09-30 23:33:44.761347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:05.106 [2024-09-30 23:33:44.761438] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:05.106 [2024-09-30 23:33:44.761458] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.106 [2024-09-30 23:33:44.761470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:05.106 [2024-09-30 23:33:44.761509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.106 [2024-09-30 23:33:44.761557] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:05.106 [2024-09-30 23:33:44.761571] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:05.106 [2024-09-30 23:33:44.761630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:05.106 [2024-09-30 23:33:44.761699] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:05.106 [2024-09-30 23:33:44.761710] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:05.106 [2024-09-30 23:33:44.761776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.106 pt1 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.106 "name": "raid_bdev1", 00:16:05.106 "uuid": "45d28894-8b63-4342-bfd3-aa95ce9ded49", 00:16:05.106 "strip_size_kb": 0, 00:16:05.106 "state": "online", 00:16:05.106 "raid_level": "raid1", 
00:16:05.106 "superblock": true, 00:16:05.106 "num_base_bdevs": 2, 00:16:05.106 "num_base_bdevs_discovered": 1, 00:16:05.106 "num_base_bdevs_operational": 1, 00:16:05.106 "base_bdevs_list": [ 00:16:05.106 { 00:16:05.106 "name": null, 00:16:05.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.106 "is_configured": false, 00:16:05.106 "data_offset": 256, 00:16:05.106 "data_size": 7936 00:16:05.106 }, 00:16:05.106 { 00:16:05.106 "name": "pt2", 00:16:05.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.106 "is_configured": true, 00:16:05.106 "data_offset": 256, 00:16:05.106 "data_size": 7936 00:16:05.106 } 00:16:05.106 ] 00:16:05.106 }' 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.106 23:33:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.366 23:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:05.366 23:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:05.366 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.366 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.626 
23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:05.626 [2024-09-30 23:33:45.254422] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 45d28894-8b63-4342-bfd3-aa95ce9ded49 '!=' 45d28894-8b63-4342-bfd3-aa95ce9ded49 ']' 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97835 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97835 ']' 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97835 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97835 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.626 killing process with pid 97835 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97835' 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97835 00:16:05.626 [2024-09-30 23:33:45.336610] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.626 [2024-09-30 23:33:45.336672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:05.626 [2024-09-30 23:33:45.336708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.626 [2024-09-30 23:33:45.336716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:05.626 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97835 00:16:05.626 [2024-09-30 23:33:45.381231] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.196 23:33:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:06.196 00:16:06.196 real 0m5.035s 00:16:06.196 user 0m7.982s 00:16:06.196 sys 0m1.144s 00:16:06.196 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.196 23:33:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.196 ************************************ 00:16:06.196 END TEST raid_superblock_test_md_separate 00:16:06.196 ************************************ 00:16:06.196 23:33:45 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:06.196 23:33:45 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:06.196 23:33:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:06.196 23:33:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.196 23:33:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.196 ************************************ 00:16:06.196 START TEST raid_rebuild_test_sb_md_separate 00:16:06.196 ************************************ 00:16:06.196 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:06.196 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:16:06.196 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:06.196 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:06.196 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98154 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98154 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98154 ']' 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.197 23:33:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:06.197 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:06.197 Zero copy mechanism will not be used. 00:16:06.197 [2024-09-30 23:33:45.931372] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:16:06.197 [2024-09-30 23:33:45.931491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98154 ] 00:16:06.457 [2024-09-30 23:33:46.097202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.457 [2024-09-30 23:33:46.171965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.457 [2024-09-30 23:33:46.251167] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.457 [2024-09-30 23:33:46.251205] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.026 BaseBdev1_malloc 
00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.026 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.026 [2024-09-30 23:33:46.760365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:07.026 [2024-09-30 23:33:46.760454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.026 [2024-09-30 23:33:46.760485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:07.026 [2024-09-30 23:33:46.760495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.026 [2024-09-30 23:33:46.762694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.026 [2024-09-30 23:33:46.762732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:07.026 BaseBdev1 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.027 BaseBdev2_malloc 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.027 [2024-09-30 23:33:46.813606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:07.027 [2024-09-30 23:33:46.813713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.027 [2024-09-30 23:33:46.813762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:07.027 [2024-09-30 23:33:46.813784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.027 [2024-09-30 23:33:46.818019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.027 [2024-09-30 23:33:46.818069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:07.027 BaseBdev2 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.027 spare_malloc 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.027 spare_delay 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.027 [2024-09-30 23:33:46.863953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.027 [2024-09-30 23:33:46.864013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.027 [2024-09-30 23:33:46.864037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:07.027 [2024-09-30 23:33:46.864049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.027 [2024-09-30 23:33:46.866080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.027 [2024-09-30 23:33:46.866114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.027 spare 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.027 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.027 [2024-09-30 23:33:46.875971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.027 [2024-09-30 23:33:46.878134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.027 [2024-09-30 23:33:46.878315] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:07.027 [2024-09-30 23:33:46.878328] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:07.027 [2024-09-30 23:33:46.878415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:07.027 [2024-09-30 23:33:46.878531] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:07.027 [2024-09-30 23:33:46.878556] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:07.027 [2024-09-30 23:33:46.878649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.287 23:33:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.287 "name": "raid_bdev1", 00:16:07.287 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:07.287 "strip_size_kb": 0, 00:16:07.287 "state": "online", 00:16:07.287 "raid_level": "raid1", 00:16:07.287 "superblock": true, 00:16:07.287 "num_base_bdevs": 2, 00:16:07.287 "num_base_bdevs_discovered": 2, 00:16:07.287 "num_base_bdevs_operational": 2, 00:16:07.287 "base_bdevs_list": [ 00:16:07.287 { 00:16:07.287 "name": "BaseBdev1", 00:16:07.287 "uuid": "507f0e58-18bd-5f8f-8afa-3b07ed222d86", 00:16:07.287 "is_configured": true, 00:16:07.287 "data_offset": 256, 00:16:07.287 "data_size": 7936 00:16:07.287 }, 00:16:07.287 { 00:16:07.287 "name": "BaseBdev2", 00:16:07.287 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:07.287 "is_configured": true, 00:16:07.287 "data_offset": 256, 00:16:07.287 "data_size": 7936 
00:16:07.287 } 00:16:07.287 ] 00:16:07.287 }' 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.287 23:33:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.546 [2024-09-30 23:33:47.323950] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.546 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:07.806 [2024-09-30 23:33:47.563310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:07.806 /dev/nbd0 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:07.806 1+0 records in 00:16:07.806 1+0 records out 00:16:07.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433056 s, 9.5 MB/s 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:07.806 23:33:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:07.806 23:33:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:08.778 7936+0 records in 00:16:08.778 7936+0 records out 00:16:08.778 32505856 bytes (33 MB, 31 MiB) copied, 0.583718 s, 55.7 MB/s 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:08.778 [2024-09-30 23:33:48.435959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:08.778 23:33:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.778 [2024-09-30 23:33:48.464005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.778 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.779 "name": "raid_bdev1", 00:16:08.779 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:08.779 "strip_size_kb": 0, 00:16:08.779 "state": "online", 00:16:08.779 "raid_level": "raid1", 00:16:08.779 "superblock": true, 00:16:08.779 "num_base_bdevs": 2, 00:16:08.779 "num_base_bdevs_discovered": 1, 00:16:08.779 "num_base_bdevs_operational": 1, 00:16:08.779 "base_bdevs_list": [ 00:16:08.779 { 00:16:08.779 "name": null, 00:16:08.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.779 "is_configured": false, 00:16:08.779 "data_offset": 0, 00:16:08.779 "data_size": 7936 00:16:08.779 }, 00:16:08.779 { 00:16:08.779 "name": "BaseBdev2", 00:16:08.779 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:08.779 "is_configured": true, 00:16:08.779 "data_offset": 256, 00:16:08.779 "data_size": 7936 00:16:08.779 } 00:16:08.779 ] 00:16:08.779 }' 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.779 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.346 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.346 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.346 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:09.346 [2024-09-30 23:33:48.927290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.346 [2024-09-30 23:33:48.930075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:09.346 [2024-09-30 23:33:48.932139] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:09.346 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.346 23:33:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.283 "name": "raid_bdev1", 00:16:10.283 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:10.283 "strip_size_kb": 0, 00:16:10.283 "state": "online", 00:16:10.283 "raid_level": "raid1", 00:16:10.283 "superblock": true, 00:16:10.283 "num_base_bdevs": 2, 00:16:10.283 "num_base_bdevs_discovered": 2, 00:16:10.283 "num_base_bdevs_operational": 2, 00:16:10.283 "process": { 00:16:10.283 "type": "rebuild", 00:16:10.283 "target": "spare", 00:16:10.283 "progress": { 00:16:10.283 "blocks": 2560, 00:16:10.283 "percent": 32 00:16:10.283 } 00:16:10.283 }, 00:16:10.283 "base_bdevs_list": [ 00:16:10.283 { 00:16:10.283 "name": "spare", 00:16:10.283 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:10.283 "is_configured": true, 00:16:10.283 "data_offset": 256, 00:16:10.283 "data_size": 7936 00:16:10.283 }, 00:16:10.283 { 00:16:10.283 "name": "BaseBdev2", 00:16:10.283 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:10.283 "is_configured": true, 00:16:10.283 "data_offset": 256, 00:16:10.283 "data_size": 7936 00:16:10.283 } 00:16:10.283 ] 00:16:10.283 }' 00:16:10.283 23:33:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.283 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.283 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.283 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.283 23:33:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:10.283 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.283 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.283 [2024-09-30 23:33:50.095660] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.542 [2024-09-30 23:33:50.140401] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.542 [2024-09-30 23:33:50.140464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.542 [2024-09-30 23:33:50.140485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.542 [2024-09-30 23:33:50.140492] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.542 23:33:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.542 "name": "raid_bdev1", 00:16:10.542 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:10.542 "strip_size_kb": 0, 00:16:10.542 "state": "online", 00:16:10.542 "raid_level": "raid1", 00:16:10.542 "superblock": true, 00:16:10.542 "num_base_bdevs": 2, 00:16:10.542 "num_base_bdevs_discovered": 1, 00:16:10.542 "num_base_bdevs_operational": 1, 00:16:10.542 "base_bdevs_list": [ 00:16:10.542 { 00:16:10.542 "name": null, 00:16:10.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.542 "is_configured": false, 00:16:10.542 "data_offset": 0, 00:16:10.542 "data_size": 7936 00:16:10.542 }, 00:16:10.542 { 00:16:10.542 "name": "BaseBdev2", 00:16:10.542 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:10.542 "is_configured": true, 00:16:10.542 "data_offset": 256, 00:16:10.542 "data_size": 7936 00:16:10.542 } 00:16:10.542 ] 00:16:10.542 }' 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.542 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.801 "name": "raid_bdev1", 00:16:10.801 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:10.801 "strip_size_kb": 0, 00:16:10.801 "state": "online", 00:16:10.801 "raid_level": "raid1", 00:16:10.801 "superblock": true, 00:16:10.801 "num_base_bdevs": 2, 00:16:10.801 "num_base_bdevs_discovered": 1, 00:16:10.801 "num_base_bdevs_operational": 1, 00:16:10.801 "base_bdevs_list": [ 00:16:10.801 { 00:16:10.801 "name": null, 00:16:10.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.801 
"is_configured": false, 00:16:10.801 "data_offset": 0, 00:16:10.801 "data_size": 7936 00:16:10.801 }, 00:16:10.801 { 00:16:10.801 "name": "BaseBdev2", 00:16:10.801 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:10.801 "is_configured": true, 00:16:10.801 "data_offset": 256, 00:16:10.801 "data_size": 7936 00:16:10.801 } 00:16:10.801 ] 00:16:10.801 }' 00:16:10.801 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.060 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.060 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.060 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.060 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.060 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.060 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.060 [2024-09-30 23:33:50.715702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.060 [2024-09-30 23:33:50.717892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:11.060 [2024-09-30 23:33:50.719966] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.060 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.060 23:33:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.997 23:33:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.997 "name": "raid_bdev1", 00:16:11.997 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:11.997 "strip_size_kb": 0, 00:16:11.997 "state": "online", 00:16:11.997 "raid_level": "raid1", 00:16:11.997 "superblock": true, 00:16:11.997 "num_base_bdevs": 2, 00:16:11.997 "num_base_bdevs_discovered": 2, 00:16:11.997 "num_base_bdevs_operational": 2, 00:16:11.997 "process": { 00:16:11.997 "type": "rebuild", 00:16:11.997 "target": "spare", 00:16:11.997 "progress": { 00:16:11.997 "blocks": 2560, 00:16:11.997 "percent": 32 00:16:11.997 } 00:16:11.997 }, 00:16:11.997 "base_bdevs_list": [ 00:16:11.997 { 00:16:11.997 "name": "spare", 00:16:11.997 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:11.997 "is_configured": true, 00:16:11.997 "data_offset": 256, 00:16:11.997 "data_size": 7936 00:16:11.997 }, 
00:16:11.997 { 00:16:11.997 "name": "BaseBdev2", 00:16:11.997 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:11.997 "is_configured": true, 00:16:11.997 "data_offset": 256, 00:16:11.997 "data_size": 7936 00:16:11.997 } 00:16:11.997 ] 00:16:11.997 }' 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.997 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:12.256 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=592 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.256 23:33:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.256 "name": "raid_bdev1", 00:16:12.256 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:12.256 "strip_size_kb": 0, 00:16:12.256 "state": "online", 00:16:12.256 "raid_level": "raid1", 00:16:12.256 "superblock": true, 00:16:12.256 "num_base_bdevs": 2, 00:16:12.256 "num_base_bdevs_discovered": 2, 00:16:12.256 "num_base_bdevs_operational": 2, 00:16:12.256 "process": { 00:16:12.256 "type": "rebuild", 00:16:12.256 "target": "spare", 00:16:12.256 "progress": { 00:16:12.256 "blocks": 2816, 00:16:12.256 "percent": 35 00:16:12.256 } 00:16:12.256 }, 00:16:12.256 "base_bdevs_list": [ 00:16:12.256 { 00:16:12.256 "name": "spare", 00:16:12.256 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:12.256 "is_configured": true, 00:16:12.256 "data_offset": 256, 00:16:12.256 "data_size": 7936 00:16:12.256 }, 00:16:12.256 { 00:16:12.256 "name": "BaseBdev2", 00:16:12.256 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:12.256 
"is_configured": true, 00:16:12.256 "data_offset": 256, 00:16:12.256 "data_size": 7936 00:16:12.256 } 00:16:12.256 ] 00:16:12.256 }' 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.256 23:33:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.192 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.193 23:33:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.193 23:33:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.193 23:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.193 "name": "raid_bdev1", 00:16:13.193 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:13.193 "strip_size_kb": 0, 00:16:13.193 "state": "online", 00:16:13.193 "raid_level": "raid1", 00:16:13.193 "superblock": true, 00:16:13.193 "num_base_bdevs": 2, 00:16:13.193 "num_base_bdevs_discovered": 2, 00:16:13.193 "num_base_bdevs_operational": 2, 00:16:13.193 "process": { 00:16:13.193 "type": "rebuild", 00:16:13.193 "target": "spare", 00:16:13.193 "progress": { 00:16:13.193 "blocks": 5632, 00:16:13.193 "percent": 70 00:16:13.193 } 00:16:13.193 }, 00:16:13.193 "base_bdevs_list": [ 00:16:13.193 { 00:16:13.193 "name": "spare", 00:16:13.193 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:13.193 "is_configured": true, 00:16:13.193 "data_offset": 256, 00:16:13.193 "data_size": 7936 00:16:13.193 }, 00:16:13.193 { 00:16:13.193 "name": "BaseBdev2", 00:16:13.193 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:13.193 "is_configured": true, 00:16:13.193 "data_offset": 256, 00:16:13.193 "data_size": 7936 00:16:13.193 } 00:16:13.193 ] 00:16:13.193 }' 00:16:13.193 23:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.451 23:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.451 23:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.451 23:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.451 23:33:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.018 [2024-09-30 23:33:53.839387] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:16:14.018 [2024-09-30 23:33:53.839469] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:14.018 [2024-09-30 23:33:53.839600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.276 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.276 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.276 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.276 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.276 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.276 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.276 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.277 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.277 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.277 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.277 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.534 "name": "raid_bdev1", 00:16:14.534 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:14.534 "strip_size_kb": 0, 00:16:14.534 "state": "online", 00:16:14.534 "raid_level": "raid1", 00:16:14.534 "superblock": true, 00:16:14.534 
"num_base_bdevs": 2, 00:16:14.534 "num_base_bdevs_discovered": 2, 00:16:14.534 "num_base_bdevs_operational": 2, 00:16:14.534 "base_bdevs_list": [ 00:16:14.534 { 00:16:14.534 "name": "spare", 00:16:14.534 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:14.534 "is_configured": true, 00:16:14.534 "data_offset": 256, 00:16:14.534 "data_size": 7936 00:16:14.534 }, 00:16:14.534 { 00:16:14.534 "name": "BaseBdev2", 00:16:14.534 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:14.534 "is_configured": true, 00:16:14.534 "data_offset": 256, 00:16:14.534 "data_size": 7936 00:16:14.534 } 00:16:14.534 ] 00:16:14.534 }' 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.534 23:33:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.534 "name": "raid_bdev1", 00:16:14.534 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:14.534 "strip_size_kb": 0, 00:16:14.534 "state": "online", 00:16:14.534 "raid_level": "raid1", 00:16:14.534 "superblock": true, 00:16:14.534 "num_base_bdevs": 2, 00:16:14.534 "num_base_bdevs_discovered": 2, 00:16:14.534 "num_base_bdevs_operational": 2, 00:16:14.534 "base_bdevs_list": [ 00:16:14.534 { 00:16:14.534 "name": "spare", 00:16:14.534 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:14.534 "is_configured": true, 00:16:14.534 "data_offset": 256, 00:16:14.534 "data_size": 7936 00:16:14.534 }, 00:16:14.534 { 00:16:14.534 "name": "BaseBdev2", 00:16:14.534 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:14.534 "is_configured": true, 00:16:14.534 "data_offset": 256, 00:16:14.534 "data_size": 7936 00:16:14.534 } 00:16:14.534 ] 00:16:14.534 }' 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.534 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.792 "name": "raid_bdev1", 00:16:14.792 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:14.792 
"strip_size_kb": 0, 00:16:14.792 "state": "online", 00:16:14.792 "raid_level": "raid1", 00:16:14.792 "superblock": true, 00:16:14.792 "num_base_bdevs": 2, 00:16:14.792 "num_base_bdevs_discovered": 2, 00:16:14.792 "num_base_bdevs_operational": 2, 00:16:14.792 "base_bdevs_list": [ 00:16:14.792 { 00:16:14.792 "name": "spare", 00:16:14.792 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:14.792 "is_configured": true, 00:16:14.792 "data_offset": 256, 00:16:14.792 "data_size": 7936 00:16:14.792 }, 00:16:14.792 { 00:16:14.792 "name": "BaseBdev2", 00:16:14.792 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:14.792 "is_configured": true, 00:16:14.792 "data_offset": 256, 00:16:14.792 "data_size": 7936 00:16:14.792 } 00:16:14.792 ] 00:16:14.792 }' 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.792 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.051 [2024-09-30 23:33:54.817739] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.051 [2024-09-30 23:33:54.817771] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.051 [2024-09-30 23:33:54.817884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.051 [2024-09-30 23:33:54.817964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.051 [2024-09-30 23:33:54.817978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.051 23:33:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:15.310 /dev/nbd0 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.310 1+0 records in 00:16:15.310 1+0 records out 00:16:15.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249754 s, 16.4 MB/s 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.310 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:15.569 /dev/nbd1 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.569 1+0 records in 00:16:15.569 1+0 records out 00:16:15.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388102 s, 10.6 MB/s 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.569 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:15.829 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:16.099 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:16.099 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.100 [2024-09-30 23:33:55.884425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.100 [2024-09-30 23:33:55.884506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.100 [2024-09-30 23:33:55.884529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:16.100 [2024-09-30 23:33:55.884543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:16.100 [2024-09-30 23:33:55.886825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.100 [2024-09-30 23:33:55.886894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.100 [2024-09-30 23:33:55.886955] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:16.100 [2024-09-30 23:33:55.887011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.100 [2024-09-30 23:33:55.887127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.100 spare 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.100 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.376 [2024-09-30 23:33:55.987026] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:16.376 [2024-09-30 23:33:55.987061] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:16.377 [2024-09-30 23:33:55.987194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:16.377 [2024-09-30 23:33:55.987310] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:16.377 [2024-09-30 23:33:55.987322] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:16.377 [2024-09-30 23:33:55.987426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.377 23:33:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.377 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.377 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.377 "name": "raid_bdev1", 00:16:16.377 "uuid": 
"6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:16.377 "strip_size_kb": 0, 00:16:16.377 "state": "online", 00:16:16.377 "raid_level": "raid1", 00:16:16.377 "superblock": true, 00:16:16.377 "num_base_bdevs": 2, 00:16:16.377 "num_base_bdevs_discovered": 2, 00:16:16.377 "num_base_bdevs_operational": 2, 00:16:16.377 "base_bdevs_list": [ 00:16:16.377 { 00:16:16.377 "name": "spare", 00:16:16.377 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:16.377 "is_configured": true, 00:16:16.377 "data_offset": 256, 00:16:16.377 "data_size": 7936 00:16:16.377 }, 00:16:16.377 { 00:16:16.377 "name": "BaseBdev2", 00:16:16.377 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:16.377 "is_configured": true, 00:16:16.377 "data_offset": 256, 00:16:16.377 "data_size": 7936 00:16:16.377 } 00:16:16.377 ] 00:16:16.377 }' 00:16:16.377 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.377 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.636 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.895 "name": "raid_bdev1", 00:16:16.895 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:16.895 "strip_size_kb": 0, 00:16:16.895 "state": "online", 00:16:16.895 "raid_level": "raid1", 00:16:16.895 "superblock": true, 00:16:16.895 "num_base_bdevs": 2, 00:16:16.895 "num_base_bdevs_discovered": 2, 00:16:16.895 "num_base_bdevs_operational": 2, 00:16:16.895 "base_bdevs_list": [ 00:16:16.895 { 00:16:16.895 "name": "spare", 00:16:16.895 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:16.895 "is_configured": true, 00:16:16.895 "data_offset": 256, 00:16:16.895 "data_size": 7936 00:16:16.895 }, 00:16:16.895 { 00:16:16.895 "name": "BaseBdev2", 00:16:16.895 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:16.895 "is_configured": true, 00:16:16.895 "data_offset": 256, 00:16:16.895 "data_size": 7936 00:16:16.895 } 00:16:16.895 ] 00:16:16.895 }' 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.895 [2024-09-30 23:33:56.643248] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.895 23:33:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.895 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.895 "name": "raid_bdev1", 00:16:16.895 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:16.895 "strip_size_kb": 0, 00:16:16.895 "state": "online", 00:16:16.895 "raid_level": "raid1", 00:16:16.895 "superblock": true, 00:16:16.895 "num_base_bdevs": 2, 00:16:16.895 "num_base_bdevs_discovered": 1, 00:16:16.895 "num_base_bdevs_operational": 1, 00:16:16.896 "base_bdevs_list": [ 00:16:16.896 { 00:16:16.896 "name": null, 00:16:16.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.896 "is_configured": false, 00:16:16.896 "data_offset": 0, 00:16:16.896 "data_size": 7936 00:16:16.896 }, 00:16:16.896 { 00:16:16.896 "name": "BaseBdev2", 00:16:16.896 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:16.896 "is_configured": true, 00:16:16.896 "data_offset": 256, 00:16:16.896 "data_size": 7936 00:16:16.896 } 00:16:16.896 ] 00:16:16.896 }' 00:16:16.896 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.896 23:33:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.464 23:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:17.464 23:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.464 23:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.464 [2024-09-30 23:33:57.110626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.464 [2024-09-30 23:33:57.110791] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:17.464 [2024-09-30 23:33:57.110814] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:17.464 [2024-09-30 23:33:57.110856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.464 [2024-09-30 23:33:57.113565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:17.464 [2024-09-30 23:33:57.115687] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:17.464 23:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.464 23:33:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.402 23:33:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.402 "name": "raid_bdev1", 00:16:18.402 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:18.402 "strip_size_kb": 0, 00:16:18.402 "state": "online", 00:16:18.402 "raid_level": "raid1", 00:16:18.402 "superblock": true, 00:16:18.402 "num_base_bdevs": 2, 00:16:18.402 "num_base_bdevs_discovered": 2, 00:16:18.402 "num_base_bdevs_operational": 2, 00:16:18.402 "process": { 00:16:18.402 "type": "rebuild", 00:16:18.402 "target": "spare", 00:16:18.402 "progress": { 00:16:18.402 "blocks": 2560, 00:16:18.402 "percent": 32 00:16:18.402 } 00:16:18.402 }, 00:16:18.402 "base_bdevs_list": [ 00:16:18.402 { 00:16:18.402 "name": "spare", 00:16:18.402 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:18.402 "is_configured": true, 00:16:18.402 "data_offset": 256, 00:16:18.402 "data_size": 7936 00:16:18.402 }, 00:16:18.402 { 00:16:18.402 "name": "BaseBdev2", 00:16:18.402 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:18.402 "is_configured": true, 00:16:18.402 "data_offset": 256, 00:16:18.402 "data_size": 7936 00:16:18.402 } 00:16:18.402 ] 00:16:18.402 
}' 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.402 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.403 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.403 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.403 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:18.403 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.403 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.403 [2024-09-30 23:33:58.255654] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.662 [2024-09-30 23:33:58.323472] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:18.662 [2024-09-30 23:33:58.323535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.662 [2024-09-30 23:33:58.323553] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.662 [2024-09-30 23:33:58.323560] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.662 "name": "raid_bdev1", 00:16:18.662 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:18.662 "strip_size_kb": 0, 00:16:18.662 "state": "online", 00:16:18.662 "raid_level": "raid1", 00:16:18.662 "superblock": true, 00:16:18.662 "num_base_bdevs": 2, 00:16:18.662 "num_base_bdevs_discovered": 1, 00:16:18.662 "num_base_bdevs_operational": 1, 00:16:18.662 "base_bdevs_list": [ 00:16:18.662 { 00:16:18.662 "name": 
null, 00:16:18.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.662 "is_configured": false, 00:16:18.662 "data_offset": 0, 00:16:18.662 "data_size": 7936 00:16:18.662 }, 00:16:18.662 { 00:16:18.662 "name": "BaseBdev2", 00:16:18.662 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:18.662 "is_configured": true, 00:16:18.662 "data_offset": 256, 00:16:18.662 "data_size": 7936 00:16:18.662 } 00:16:18.662 ] 00:16:18.662 }' 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.662 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.232 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.232 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.232 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.232 [2024-09-30 23:33:58.787528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.232 [2024-09-30 23:33:58.787595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.232 [2024-09-30 23:33:58.787631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:19.232 [2024-09-30 23:33:58.787645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.232 [2024-09-30 23:33:58.787905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.232 [2024-09-30 23:33:58.787927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.232 [2024-09-30 23:33:58.787989] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:19.232 [2024-09-30 23:33:58.788004] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:19.232 [2024-09-30 23:33:58.788020] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:19.232 [2024-09-30 23:33:58.788049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.232 [2024-09-30 23:33:58.790184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:19.232 [2024-09-30 23:33:58.792294] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.232 spare 00:16:19.232 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.232 23:33:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:20.170 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.171 23:33:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.171 "name": "raid_bdev1", 00:16:20.171 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:20.171 "strip_size_kb": 0, 00:16:20.171 "state": "online", 00:16:20.171 "raid_level": "raid1", 00:16:20.171 "superblock": true, 00:16:20.171 "num_base_bdevs": 2, 00:16:20.171 "num_base_bdevs_discovered": 2, 00:16:20.171 "num_base_bdevs_operational": 2, 00:16:20.171 "process": { 00:16:20.171 "type": "rebuild", 00:16:20.171 "target": "spare", 00:16:20.171 "progress": { 00:16:20.171 "blocks": 2560, 00:16:20.171 "percent": 32 00:16:20.171 } 00:16:20.171 }, 00:16:20.171 "base_bdevs_list": [ 00:16:20.171 { 00:16:20.171 "name": "spare", 00:16:20.171 "uuid": "743e4c0b-6661-54a8-9cef-0acc11cf5ccc", 00:16:20.171 "is_configured": true, 00:16:20.171 "data_offset": 256, 00:16:20.171 "data_size": 7936 00:16:20.171 }, 00:16:20.171 { 00:16:20.171 "name": "BaseBdev2", 00:16:20.171 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:20.171 "is_configured": true, 00:16:20.171 "data_offset": 256, 00:16:20.171 "data_size": 7936 00:16:20.171 } 00:16:20.171 ] 00:16:20.171 }' 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.171 23:33:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.171 [2024-09-30 23:33:59.956038] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.171 [2024-09-30 23:33:59.999879] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:20.171 [2024-09-30 23:33:59.999940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.171 [2024-09-30 23:33:59.999954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.171 [2024-09-30 23:33:59.999963] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.171 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.430 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.430 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.430 "name": "raid_bdev1", 00:16:20.430 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:20.430 "strip_size_kb": 0, 00:16:20.430 "state": "online", 00:16:20.430 "raid_level": "raid1", 00:16:20.430 "superblock": true, 00:16:20.430 "num_base_bdevs": 2, 00:16:20.430 "num_base_bdevs_discovered": 1, 00:16:20.430 "num_base_bdevs_operational": 1, 00:16:20.430 "base_bdevs_list": [ 00:16:20.430 { 00:16:20.430 "name": null, 00:16:20.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.430 "is_configured": false, 00:16:20.430 "data_offset": 0, 00:16:20.430 "data_size": 7936 00:16:20.430 }, 00:16:20.430 { 00:16:20.430 "name": "BaseBdev2", 00:16:20.430 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:20.430 "is_configured": true, 00:16:20.430 "data_offset": 256, 00:16:20.430 "data_size": 7936 00:16:20.430 } 00:16:20.430 ] 00:16:20.430 }' 00:16:20.430 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.430 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.690 "name": "raid_bdev1", 00:16:20.690 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:20.690 "strip_size_kb": 0, 00:16:20.690 "state": "online", 00:16:20.690 "raid_level": "raid1", 00:16:20.690 "superblock": true, 00:16:20.690 "num_base_bdevs": 2, 00:16:20.690 "num_base_bdevs_discovered": 1, 00:16:20.690 "num_base_bdevs_operational": 1, 00:16:20.690 "base_bdevs_list": [ 00:16:20.690 { 00:16:20.690 "name": null, 00:16:20.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.690 "is_configured": false, 00:16:20.690 "data_offset": 0, 00:16:20.690 "data_size": 7936 00:16:20.690 }, 00:16:20.690 { 00:16:20.690 "name": "BaseBdev2", 00:16:20.690 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 
00:16:20.690 "is_configured": true, 00:16:20.690 "data_offset": 256, 00:16:20.690 "data_size": 7936 00:16:20.690 } 00:16:20.690 ] 00:16:20.690 }' 00:16:20.690 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.950 [2024-09-30 23:34:00.599427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:20.950 [2024-09-30 23:34:00.599504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.950 [2024-09-30 23:34:00.599525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:20.950 [2024-09-30 23:34:00.599545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:20.950 [2024-09-30 23:34:00.599767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.950 [2024-09-30 23:34:00.599793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:20.950 [2024-09-30 23:34:00.599842] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:20.950 [2024-09-30 23:34:00.599877] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:20.950 [2024-09-30 23:34:00.599886] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:20.950 [2024-09-30 23:34:00.599899] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:20.950 BaseBdev1 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.950 23:34:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.887 23:34:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.887 "name": "raid_bdev1", 00:16:21.887 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:21.887 "strip_size_kb": 0, 00:16:21.887 "state": "online", 00:16:21.887 "raid_level": "raid1", 00:16:21.887 "superblock": true, 00:16:21.887 "num_base_bdevs": 2, 00:16:21.887 "num_base_bdevs_discovered": 1, 00:16:21.887 "num_base_bdevs_operational": 1, 00:16:21.887 "base_bdevs_list": [ 00:16:21.887 { 00:16:21.887 "name": null, 00:16:21.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.887 "is_configured": false, 00:16:21.887 "data_offset": 0, 00:16:21.887 "data_size": 7936 00:16:21.887 }, 00:16:21.887 { 00:16:21.887 "name": "BaseBdev2", 00:16:21.887 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:21.887 "is_configured": true, 00:16:21.887 "data_offset": 256, 00:16:21.887 "data_size": 7936 00:16:21.887 } 00:16:21.887 ] 00:16:21.887 }' 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.887 23:34:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.456 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.456 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.456 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.456 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.456 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.457 "name": "raid_bdev1", 00:16:22.457 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:22.457 "strip_size_kb": 0, 00:16:22.457 "state": "online", 00:16:22.457 "raid_level": "raid1", 00:16:22.457 "superblock": true, 00:16:22.457 "num_base_bdevs": 2, 00:16:22.457 "num_base_bdevs_discovered": 1, 00:16:22.457 "num_base_bdevs_operational": 1, 00:16:22.457 "base_bdevs_list": [ 00:16:22.457 { 00:16:22.457 "name": null, 00:16:22.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.457 
"is_configured": false, 00:16:22.457 "data_offset": 0, 00:16:22.457 "data_size": 7936 00:16:22.457 }, 00:16:22.457 { 00:16:22.457 "name": "BaseBdev2", 00:16:22.457 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:22.457 "is_configured": true, 00:16:22.457 "data_offset": 256, 00:16:22.457 "data_size": 7936 00:16:22.457 } 00:16:22.457 ] 00:16:22.457 }' 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:22.457 23:34:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.457 [2024-09-30 23:34:02.216723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.457 [2024-09-30 23:34:02.216922] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:22.457 [2024-09-30 23:34:02.216936] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:22.457 request: 00:16:22.457 { 00:16:22.457 "base_bdev": "BaseBdev1", 00:16:22.457 "raid_bdev": "raid_bdev1", 00:16:22.457 "method": "bdev_raid_add_base_bdev", 00:16:22.457 "req_id": 1 00:16:22.457 } 00:16:22.457 Got JSON-RPC error response 00:16:22.457 response: 00:16:22.457 { 00:16:22.457 "code": -22, 00:16:22.457 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:22.457 } 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.457 23:34:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.395 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.654 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.654 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.654 "name": "raid_bdev1", 00:16:23.654 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:23.654 "strip_size_kb": 0, 00:16:23.654 "state": "online", 00:16:23.654 "raid_level": "raid1", 00:16:23.654 "superblock": true, 00:16:23.655 "num_base_bdevs": 2, 00:16:23.655 
"num_base_bdevs_discovered": 1, 00:16:23.655 "num_base_bdevs_operational": 1, 00:16:23.655 "base_bdevs_list": [ 00:16:23.655 { 00:16:23.655 "name": null, 00:16:23.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.655 "is_configured": false, 00:16:23.655 "data_offset": 0, 00:16:23.655 "data_size": 7936 00:16:23.655 }, 00:16:23.655 { 00:16:23.655 "name": "BaseBdev2", 00:16:23.655 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:23.655 "is_configured": true, 00:16:23.655 "data_offset": 256, 00:16:23.655 "data_size": 7936 00:16:23.655 } 00:16:23.655 ] 00:16:23.655 }' 00:16:23.655 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.655 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.914 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.914 "name": "raid_bdev1", 00:16:23.914 "uuid": "6d2731ec-6e30-4a89-ad16-f58291be5ef3", 00:16:23.914 "strip_size_kb": 0, 00:16:23.914 "state": "online", 00:16:23.914 "raid_level": "raid1", 00:16:23.914 "superblock": true, 00:16:23.914 "num_base_bdevs": 2, 00:16:23.914 "num_base_bdevs_discovered": 1, 00:16:23.914 "num_base_bdevs_operational": 1, 00:16:23.914 "base_bdevs_list": [ 00:16:23.914 { 00:16:23.914 "name": null, 00:16:23.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.914 "is_configured": false, 00:16:23.914 "data_offset": 0, 00:16:23.914 "data_size": 7936 00:16:23.914 }, 00:16:23.914 { 00:16:23.914 "name": "BaseBdev2", 00:16:23.914 "uuid": "03dc56d9-629e-5542-aa0c-e2abc8773c53", 00:16:23.914 "is_configured": true, 00:16:23.914 "data_offset": 256, 00:16:23.914 "data_size": 7936 00:16:23.914 } 00:16:23.915 ] 00:16:23.915 }' 00:16:23.915 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.915 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.915 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98154 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98154 ']' 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98154 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:24.175 23:34:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98154 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.175 killing process with pid 98154 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98154' 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98154 00:16:24.175 Received shutdown signal, test time was about 60.000000 seconds 00:16:24.175 00:16:24.175 Latency(us) 00:16:24.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.175 =================================================================================================================== 00:16:24.175 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.175 [2024-09-30 23:34:03.820680] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.175 [2024-09-30 23:34:03.820825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.175 23:34:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98154 00:16:24.175 [2024-09-30 23:34:03.820899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.175 [2024-09-30 23:34:03.820910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:24.175 [2024-09-30 23:34:03.881389] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.436 23:34:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:24.436 00:16:24.436 real 0m18.407s 00:16:24.436 user 0m24.306s 00:16:24.436 sys 0m2.684s 00:16:24.436 23:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.436 23:34:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.436 ************************************ 00:16:24.436 END TEST raid_rebuild_test_sb_md_separate 00:16:24.436 ************************************ 00:16:24.696 23:34:04 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:24.696 23:34:04 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:24.696 23:34:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:24.696 23:34:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.696 23:34:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.696 ************************************ 00:16:24.696 START TEST raid_state_function_test_sb_md_interleaved 00:16:24.696 ************************************ 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98831 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98831' 00:16:24.696 Process raid pid: 98831 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98831 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98831 ']' 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.696 23:34:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:24.696 [2024-09-30 23:34:04.428265] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:16:24.696 [2024-09-30 23:34:04.428399] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.957 [2024-09-30 23:34:04.596618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.957 [2024-09-30 23:34:04.672036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.957 [2024-09-30 23:34:04.751012] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.957 [2024-09-30 23:34:04.751056] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.527 [2024-09-30 23:34:05.236459] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.527 [2024-09-30 23:34:05.236513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.527 [2024-09-30 23:34:05.236526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.527 [2024-09-30 23:34:05.236536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.527 23:34:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:25.527 23:34:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.527 "name": "Existed_Raid", 00:16:25.527 "uuid": "ea913df2-e633-4a4a-95b9-e04151bb38c6", 00:16:25.527 "strip_size_kb": 0, 00:16:25.527 "state": "configuring", 00:16:25.527 "raid_level": "raid1", 00:16:25.527 "superblock": true, 00:16:25.527 "num_base_bdevs": 2, 00:16:25.527 "num_base_bdevs_discovered": 0, 00:16:25.527 "num_base_bdevs_operational": 2, 00:16:25.527 "base_bdevs_list": [ 00:16:25.527 { 00:16:25.527 "name": "BaseBdev1", 00:16:25.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.527 "is_configured": false, 00:16:25.527 "data_offset": 0, 00:16:25.527 "data_size": 0 00:16:25.527 }, 00:16:25.527 { 00:16:25.527 "name": "BaseBdev2", 00:16:25.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.527 "is_configured": false, 00:16:25.527 "data_offset": 0, 00:16:25.527 "data_size": 0 00:16:25.527 } 00:16:25.527 ] 00:16:25.527 }' 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.527 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.098 [2024-09-30 23:34:05.695669] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.098 [2024-09-30 23:34:05.695719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.098 [2024-09-30 23:34:05.707717] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.098 [2024-09-30 23:34:05.707757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.098 [2024-09-30 23:34:05.707765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.098 [2024-09-30 23:34:05.707775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.098 [2024-09-30 23:34:05.735427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.098 BaseBdev1 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.098 [ 00:16:26.098 { 00:16:26.098 "name": "BaseBdev1", 00:16:26.098 "aliases": [ 00:16:26.098 "2c83979a-4949-4e9b-8648-7fcac2a2558a" 00:16:26.098 ], 00:16:26.098 "product_name": "Malloc disk", 00:16:26.098 "block_size": 4128, 00:16:26.098 "num_blocks": 8192, 00:16:26.098 "uuid": "2c83979a-4949-4e9b-8648-7fcac2a2558a", 00:16:26.098 "md_size": 32, 00:16:26.098 
"md_interleave": true, 00:16:26.098 "dif_type": 0, 00:16:26.098 "assigned_rate_limits": { 00:16:26.098 "rw_ios_per_sec": 0, 00:16:26.098 "rw_mbytes_per_sec": 0, 00:16:26.098 "r_mbytes_per_sec": 0, 00:16:26.098 "w_mbytes_per_sec": 0 00:16:26.098 }, 00:16:26.098 "claimed": true, 00:16:26.098 "claim_type": "exclusive_write", 00:16:26.098 "zoned": false, 00:16:26.098 "supported_io_types": { 00:16:26.098 "read": true, 00:16:26.098 "write": true, 00:16:26.098 "unmap": true, 00:16:26.098 "flush": true, 00:16:26.098 "reset": true, 00:16:26.098 "nvme_admin": false, 00:16:26.098 "nvme_io": false, 00:16:26.098 "nvme_io_md": false, 00:16:26.098 "write_zeroes": true, 00:16:26.098 "zcopy": true, 00:16:26.098 "get_zone_info": false, 00:16:26.098 "zone_management": false, 00:16:26.098 "zone_append": false, 00:16:26.098 "compare": false, 00:16:26.098 "compare_and_write": false, 00:16:26.098 "abort": true, 00:16:26.098 "seek_hole": false, 00:16:26.098 "seek_data": false, 00:16:26.098 "copy": true, 00:16:26.098 "nvme_iov_md": false 00:16:26.098 }, 00:16:26.098 "memory_domains": [ 00:16:26.098 { 00:16:26.098 "dma_device_id": "system", 00:16:26.098 "dma_device_type": 1 00:16:26.098 }, 00:16:26.098 { 00:16:26.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.098 "dma_device_type": 2 00:16:26.098 } 00:16:26.098 ], 00:16:26.098 "driver_specific": {} 00:16:26.098 } 00:16:26.098 ] 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.098 23:34:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.098 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.098 "name": "Existed_Raid", 00:16:26.098 "uuid": "05d3337a-5175-4b15-a3b6-3ec40050d28a", 00:16:26.098 "strip_size_kb": 0, 00:16:26.098 "state": "configuring", 00:16:26.098 "raid_level": "raid1", 
00:16:26.098 "superblock": true, 00:16:26.098 "num_base_bdevs": 2, 00:16:26.098 "num_base_bdevs_discovered": 1, 00:16:26.098 "num_base_bdevs_operational": 2, 00:16:26.098 "base_bdevs_list": [ 00:16:26.098 { 00:16:26.098 "name": "BaseBdev1", 00:16:26.099 "uuid": "2c83979a-4949-4e9b-8648-7fcac2a2558a", 00:16:26.099 "is_configured": true, 00:16:26.099 "data_offset": 256, 00:16:26.099 "data_size": 7936 00:16:26.099 }, 00:16:26.099 { 00:16:26.099 "name": "BaseBdev2", 00:16:26.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.099 "is_configured": false, 00:16:26.099 "data_offset": 0, 00:16:26.099 "data_size": 0 00:16:26.099 } 00:16:26.099 ] 00:16:26.099 }' 00:16:26.099 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.099 23:34:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.668 [2024-09-30 23:34:06.254564] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.668 [2024-09-30 23:34:06.254615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.668 [2024-09-30 23:34:06.266628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.668 [2024-09-30 23:34:06.268768] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.668 [2024-09-30 23:34:06.268811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.668 
23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.668 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.669 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.669 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.669 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.669 "name": "Existed_Raid", 00:16:26.669 "uuid": "0fd8203f-0537-4448-bb80-7dee5c97f358", 00:16:26.669 "strip_size_kb": 0, 00:16:26.669 "state": "configuring", 00:16:26.669 "raid_level": "raid1", 00:16:26.669 "superblock": true, 00:16:26.669 "num_base_bdevs": 2, 00:16:26.669 "num_base_bdevs_discovered": 1, 00:16:26.669 "num_base_bdevs_operational": 2, 00:16:26.669 "base_bdevs_list": [ 00:16:26.669 { 00:16:26.669 "name": "BaseBdev1", 00:16:26.669 "uuid": "2c83979a-4949-4e9b-8648-7fcac2a2558a", 00:16:26.669 "is_configured": true, 00:16:26.669 "data_offset": 256, 00:16:26.669 "data_size": 7936 00:16:26.669 }, 00:16:26.669 { 00:16:26.669 "name": "BaseBdev2", 00:16:26.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.669 "is_configured": false, 00:16:26.669 "data_offset": 0, 00:16:26.669 "data_size": 0 00:16:26.669 } 00:16:26.669 ] 00:16:26.669 }' 00:16:26.669 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:26.669 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.929 [2024-09-30 23:34:06.727977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.929 [2024-09-30 23:34:06.728397] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:26.929 [2024-09-30 23:34:06.728451] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:26.929 [2024-09-30 23:34:06.728713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:26.929 BaseBdev2 00:16:26.929 [2024-09-30 23:34:06.728970] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:26.929 [2024-09-30 23:34:06.729034] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:26.929 [2024-09-30 23:34:06.729184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:26.929 [ 00:16:26.929 { 00:16:26.929 "name": "BaseBdev2", 00:16:26.929 "aliases": [ 00:16:26.929 "af8003da-97a2-46bb-bd04-214c18d8cc07" 00:16:26.929 ], 00:16:26.929 "product_name": "Malloc disk", 00:16:26.929 "block_size": 4128, 00:16:26.929 "num_blocks": 8192, 00:16:26.929 "uuid": "af8003da-97a2-46bb-bd04-214c18d8cc07", 00:16:26.929 "md_size": 32, 00:16:26.929 "md_interleave": true, 00:16:26.929 "dif_type": 0, 00:16:26.929 "assigned_rate_limits": { 00:16:26.929 "rw_ios_per_sec": 0, 00:16:26.929 "rw_mbytes_per_sec": 0, 00:16:26.929 "r_mbytes_per_sec": 0, 00:16:26.929 "w_mbytes_per_sec": 0 00:16:26.929 }, 00:16:26.929 "claimed": true, 00:16:26.929 "claim_type": "exclusive_write", 
00:16:26.929 "zoned": false, 00:16:26.929 "supported_io_types": { 00:16:26.929 "read": true, 00:16:26.929 "write": true, 00:16:26.929 "unmap": true, 00:16:26.929 "flush": true, 00:16:26.929 "reset": true, 00:16:26.929 "nvme_admin": false, 00:16:26.929 "nvme_io": false, 00:16:26.929 "nvme_io_md": false, 00:16:26.929 "write_zeroes": true, 00:16:26.929 "zcopy": true, 00:16:26.929 "get_zone_info": false, 00:16:26.929 "zone_management": false, 00:16:26.929 "zone_append": false, 00:16:26.929 "compare": false, 00:16:26.929 "compare_and_write": false, 00:16:26.929 "abort": true, 00:16:26.929 "seek_hole": false, 00:16:26.929 "seek_data": false, 00:16:26.929 "copy": true, 00:16:26.929 "nvme_iov_md": false 00:16:26.929 }, 00:16:26.929 "memory_domains": [ 00:16:26.929 { 00:16:26.929 "dma_device_id": "system", 00:16:26.929 "dma_device_type": 1 00:16:26.929 }, 00:16:26.929 { 00:16:26.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.929 "dma_device_type": 2 00:16:26.929 } 00:16:26.929 ], 00:16:26.929 "driver_specific": {} 00:16:26.929 } 00:16:26.929 ] 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.929 
23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.929 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.189 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.189 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.189 "name": "Existed_Raid", 00:16:27.189 "uuid": "0fd8203f-0537-4448-bb80-7dee5c97f358", 00:16:27.189 "strip_size_kb": 0, 00:16:27.189 "state": "online", 00:16:27.189 "raid_level": "raid1", 00:16:27.189 "superblock": true, 00:16:27.189 "num_base_bdevs": 2, 00:16:27.189 "num_base_bdevs_discovered": 2, 00:16:27.189 
"num_base_bdevs_operational": 2, 00:16:27.189 "base_bdevs_list": [ 00:16:27.189 { 00:16:27.189 "name": "BaseBdev1", 00:16:27.189 "uuid": "2c83979a-4949-4e9b-8648-7fcac2a2558a", 00:16:27.189 "is_configured": true, 00:16:27.189 "data_offset": 256, 00:16:27.189 "data_size": 7936 00:16:27.189 }, 00:16:27.189 { 00:16:27.189 "name": "BaseBdev2", 00:16:27.189 "uuid": "af8003da-97a2-46bb-bd04-214c18d8cc07", 00:16:27.189 "is_configured": true, 00:16:27.189 "data_offset": 256, 00:16:27.189 "data_size": 7936 00:16:27.189 } 00:16:27.189 ] 00:16:27.189 }' 00:16:27.189 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.189 23:34:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.449 23:34:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.449 [2024-09-30 23:34:07.211459] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.449 "name": "Existed_Raid", 00:16:27.449 "aliases": [ 00:16:27.449 "0fd8203f-0537-4448-bb80-7dee5c97f358" 00:16:27.449 ], 00:16:27.449 "product_name": "Raid Volume", 00:16:27.449 "block_size": 4128, 00:16:27.449 "num_blocks": 7936, 00:16:27.449 "uuid": "0fd8203f-0537-4448-bb80-7dee5c97f358", 00:16:27.449 "md_size": 32, 00:16:27.449 "md_interleave": true, 00:16:27.449 "dif_type": 0, 00:16:27.449 "assigned_rate_limits": { 00:16:27.449 "rw_ios_per_sec": 0, 00:16:27.449 "rw_mbytes_per_sec": 0, 00:16:27.449 "r_mbytes_per_sec": 0, 00:16:27.449 "w_mbytes_per_sec": 0 00:16:27.449 }, 00:16:27.449 "claimed": false, 00:16:27.449 "zoned": false, 00:16:27.449 "supported_io_types": { 00:16:27.449 "read": true, 00:16:27.449 "write": true, 00:16:27.449 "unmap": false, 00:16:27.449 "flush": false, 00:16:27.449 "reset": true, 00:16:27.449 "nvme_admin": false, 00:16:27.449 "nvme_io": false, 00:16:27.449 "nvme_io_md": false, 00:16:27.449 "write_zeroes": true, 00:16:27.449 "zcopy": false, 00:16:27.449 "get_zone_info": false, 00:16:27.449 "zone_management": false, 00:16:27.449 "zone_append": false, 00:16:27.449 "compare": false, 00:16:27.449 "compare_and_write": false, 00:16:27.449 "abort": false, 00:16:27.449 "seek_hole": false, 00:16:27.449 "seek_data": false, 00:16:27.449 "copy": false, 00:16:27.449 "nvme_iov_md": false 00:16:27.449 }, 00:16:27.449 "memory_domains": [ 00:16:27.449 { 00:16:27.449 "dma_device_id": "system", 00:16:27.449 "dma_device_type": 1 00:16:27.449 }, 00:16:27.449 { 00:16:27.449 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:27.449 "dma_device_type": 2 00:16:27.449 }, 00:16:27.449 { 00:16:27.449 "dma_device_id": "system", 00:16:27.449 "dma_device_type": 1 00:16:27.449 }, 00:16:27.449 { 00:16:27.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.449 "dma_device_type": 2 00:16:27.449 } 00:16:27.449 ], 00:16:27.449 "driver_specific": { 00:16:27.449 "raid": { 00:16:27.449 "uuid": "0fd8203f-0537-4448-bb80-7dee5c97f358", 00:16:27.449 "strip_size_kb": 0, 00:16:27.449 "state": "online", 00:16:27.449 "raid_level": "raid1", 00:16:27.449 "superblock": true, 00:16:27.449 "num_base_bdevs": 2, 00:16:27.449 "num_base_bdevs_discovered": 2, 00:16:27.449 "num_base_bdevs_operational": 2, 00:16:27.449 "base_bdevs_list": [ 00:16:27.449 { 00:16:27.449 "name": "BaseBdev1", 00:16:27.449 "uuid": "2c83979a-4949-4e9b-8648-7fcac2a2558a", 00:16:27.449 "is_configured": true, 00:16:27.449 "data_offset": 256, 00:16:27.449 "data_size": 7936 00:16:27.449 }, 00:16:27.449 { 00:16:27.449 "name": "BaseBdev2", 00:16:27.449 "uuid": "af8003da-97a2-46bb-bd04-214c18d8cc07", 00:16:27.449 "is_configured": true, 00:16:27.449 "data_offset": 256, 00:16:27.449 "data_size": 7936 00:16:27.449 } 00:16:27.449 ] 00:16:27.449 } 00:16:27.449 } 00:16:27.449 }' 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:27.449 BaseBdev2' 00:16:27.449 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:27.710 
23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.710 [2024-09-30 23:34:07.422887] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.710 23:34:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.710 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.710 "name": "Existed_Raid", 00:16:27.710 "uuid": "0fd8203f-0537-4448-bb80-7dee5c97f358", 00:16:27.710 "strip_size_kb": 0, 00:16:27.710 "state": "online", 00:16:27.710 "raid_level": "raid1", 00:16:27.710 "superblock": true, 00:16:27.710 "num_base_bdevs": 2, 00:16:27.710 "num_base_bdevs_discovered": 1, 00:16:27.710 "num_base_bdevs_operational": 1, 00:16:27.710 "base_bdevs_list": [ 00:16:27.710 { 00:16:27.710 "name": null, 00:16:27.710 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:27.710 "is_configured": false, 00:16:27.710 "data_offset": 0, 00:16:27.711 "data_size": 7936 00:16:27.711 }, 00:16:27.711 { 00:16:27.711 "name": "BaseBdev2", 00:16:27.711 "uuid": "af8003da-97a2-46bb-bd04-214c18d8cc07", 00:16:27.711 "is_configured": true, 00:16:27.711 "data_offset": 256, 00:16:27.711 "data_size": 7936 00:16:27.711 } 00:16:27.711 ] 00:16:27.711 }' 00:16:27.711 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.711 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.280 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:28.281 23:34:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.281 [2024-09-30 23:34:07.947366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.281 [2024-09-30 23:34:07.947472] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.281 [2024-09-30 23:34:07.969195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.281 [2024-09-30 23:34:07.969246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.281 [2024-09-30 23:34:07.969259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.281 23:34:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98831 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98831 ']' 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98831 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98831 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.281 killing process with pid 98831 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98831' 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98831 00:16:28.281 [2024-09-30 23:34:08.066597] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.281 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98831 00:16:28.281 [2024-09-30 23:34:08.068173] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.851 
23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:28.851 00:16:28.851 real 0m4.127s 00:16:28.851 user 0m6.299s 00:16:28.851 sys 0m0.935s 00:16:28.851 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.851 23:34:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.851 ************************************ 00:16:28.851 END TEST raid_state_function_test_sb_md_interleaved 00:16:28.851 ************************************ 00:16:28.851 23:34:08 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:28.851 23:34:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:28.851 23:34:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.851 23:34:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.851 ************************************ 00:16:28.851 START TEST raid_superblock_test_md_interleaved 00:16:28.851 ************************************ 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99072 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99072 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99072 ']' 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.851 23:34:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:28.851 [2024-09-30 23:34:08.628424] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:16:28.852 [2024-09-30 23:34:08.628579] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99072 ] 00:16:29.111 [2024-09-30 23:34:08.794214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.111 [2024-09-30 23:34:08.867350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.111 [2024-09-30 23:34:08.943832] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.111 [2024-09-30 23:34:08.943886] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.685 malloc1 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.685 [2024-09-30 23:34:09.491171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.685 [2024-09-30 23:34:09.491304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.685 [2024-09-30 23:34:09.491358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:29.685 [2024-09-30 23:34:09.491401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.685 
[2024-09-30 23:34:09.493631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.685 [2024-09-30 23:34:09.493705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.685 pt1 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.685 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.945 malloc2 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.945 [2024-09-30 23:34:09.547006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:29.945 [2024-09-30 23:34:09.547204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.945 [2024-09-30 23:34:09.547283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:29.945 [2024-09-30 23:34:09.547360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.945 [2024-09-30 23:34:09.551763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.945 [2024-09-30 23:34:09.551934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:29.945 pt2 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.945 [2024-09-30 23:34:09.560269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.945 [2024-09-30 23:34:09.563505] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.945 [2024-09-30 23:34:09.563803] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:29.945 [2024-09-30 23:34:09.563905] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:29.945 [2024-09-30 23:34:09.564085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:29.945 [2024-09-30 23:34:09.564238] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:29.945 [2024-09-30 23:34:09.564308] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:29.945 [2024-09-30 23:34:09.564519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.945 
23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.945 "name": "raid_bdev1", 00:16:29.945 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:29.945 "strip_size_kb": 0, 00:16:29.945 "state": "online", 00:16:29.945 "raid_level": "raid1", 00:16:29.945 "superblock": true, 00:16:29.945 "num_base_bdevs": 2, 00:16:29.945 "num_base_bdevs_discovered": 2, 00:16:29.945 "num_base_bdevs_operational": 2, 00:16:29.945 "base_bdevs_list": [ 00:16:29.945 { 00:16:29.945 "name": "pt1", 00:16:29.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.945 "is_configured": true, 00:16:29.945 "data_offset": 256, 00:16:29.945 "data_size": 7936 00:16:29.945 }, 00:16:29.945 { 00:16:29.945 "name": "pt2", 00:16:29.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.945 "is_configured": true, 00:16:29.945 "data_offset": 256, 00:16:29.945 "data_size": 7936 00:16:29.945 } 00:16:29.945 ] 00:16:29.945 }' 00:16:29.945 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.945 23:34:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.205 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:30.205 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:30.205 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:30.205 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:30.206 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:30.206 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:30.206 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.206 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:30.206 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.206 23:34:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.206 [2024-09-30 23:34:10.004098] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.206 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.206 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:30.206 "name": "raid_bdev1", 00:16:30.206 "aliases": [ 00:16:30.206 "970972f4-7cbc-433a-a294-49fbc83a0658" 00:16:30.206 ], 00:16:30.206 "product_name": "Raid Volume", 00:16:30.206 "block_size": 4128, 00:16:30.206 "num_blocks": 7936, 00:16:30.206 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:30.206 "md_size": 32, 
00:16:30.206 "md_interleave": true, 00:16:30.206 "dif_type": 0, 00:16:30.206 "assigned_rate_limits": { 00:16:30.206 "rw_ios_per_sec": 0, 00:16:30.206 "rw_mbytes_per_sec": 0, 00:16:30.206 "r_mbytes_per_sec": 0, 00:16:30.206 "w_mbytes_per_sec": 0 00:16:30.206 }, 00:16:30.206 "claimed": false, 00:16:30.206 "zoned": false, 00:16:30.206 "supported_io_types": { 00:16:30.206 "read": true, 00:16:30.206 "write": true, 00:16:30.206 "unmap": false, 00:16:30.206 "flush": false, 00:16:30.206 "reset": true, 00:16:30.206 "nvme_admin": false, 00:16:30.206 "nvme_io": false, 00:16:30.206 "nvme_io_md": false, 00:16:30.206 "write_zeroes": true, 00:16:30.206 "zcopy": false, 00:16:30.206 "get_zone_info": false, 00:16:30.206 "zone_management": false, 00:16:30.206 "zone_append": false, 00:16:30.206 "compare": false, 00:16:30.206 "compare_and_write": false, 00:16:30.206 "abort": false, 00:16:30.206 "seek_hole": false, 00:16:30.206 "seek_data": false, 00:16:30.206 "copy": false, 00:16:30.206 "nvme_iov_md": false 00:16:30.206 }, 00:16:30.206 "memory_domains": [ 00:16:30.206 { 00:16:30.206 "dma_device_id": "system", 00:16:30.206 "dma_device_type": 1 00:16:30.206 }, 00:16:30.206 { 00:16:30.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.206 "dma_device_type": 2 00:16:30.206 }, 00:16:30.206 { 00:16:30.206 "dma_device_id": "system", 00:16:30.206 "dma_device_type": 1 00:16:30.206 }, 00:16:30.206 { 00:16:30.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.206 "dma_device_type": 2 00:16:30.206 } 00:16:30.206 ], 00:16:30.206 "driver_specific": { 00:16:30.206 "raid": { 00:16:30.206 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:30.206 "strip_size_kb": 0, 00:16:30.206 "state": "online", 00:16:30.206 "raid_level": "raid1", 00:16:30.206 "superblock": true, 00:16:30.206 "num_base_bdevs": 2, 00:16:30.206 "num_base_bdevs_discovered": 2, 00:16:30.206 "num_base_bdevs_operational": 2, 00:16:30.206 "base_bdevs_list": [ 00:16:30.206 { 00:16:30.206 "name": "pt1", 00:16:30.206 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:30.206 "is_configured": true, 00:16:30.206 "data_offset": 256, 00:16:30.206 "data_size": 7936 00:16:30.206 }, 00:16:30.206 { 00:16:30.206 "name": "pt2", 00:16:30.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.206 "is_configured": true, 00:16:30.206 "data_offset": 256, 00:16:30.206 "data_size": 7936 00:16:30.206 } 00:16:30.206 ] 00:16:30.206 } 00:16:30.206 } 00:16:30.206 }' 00:16:30.206 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:30.467 pt2' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:30.467 23:34:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 [2024-09-30 23:34:10.199683] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=970972f4-7cbc-433a-a294-49fbc83a0658 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 970972f4-7cbc-433a-a294-49fbc83a0658 ']' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 [2024-09-30 23:34:10.243382] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.467 [2024-09-30 23:34:10.243447] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.467 [2024-09-30 23:34:10.243545] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.467 [2024-09-30 23:34:10.243656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.467 [2024-09-30 23:34:10.243688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.467 23:34:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.467 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.727 23:34:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.727 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:30.727 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:30.727 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:30.727 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:30.727 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:30.727 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.727 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:30.727 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.728 [2024-09-30 23:34:10.371174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:30.728 [2024-09-30 23:34:10.373304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:30.728 [2024-09-30 23:34:10.373399] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:30.728 [2024-09-30 23:34:10.373482] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:30.728 [2024-09-30 23:34:10.373526] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.728 [2024-09-30 23:34:10.373546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:30.728 request: 00:16:30.728 { 00:16:30.728 "name": "raid_bdev1", 00:16:30.728 "raid_level": "raid1", 00:16:30.728 "base_bdevs": [ 00:16:30.728 "malloc1", 00:16:30.728 "malloc2" 00:16:30.728 ], 00:16:30.728 "superblock": false, 00:16:30.728 "method": "bdev_raid_create", 00:16:30.728 "req_id": 1 00:16:30.728 } 00:16:30.728 Got JSON-RPC error response 00:16:30.728 response: 00:16:30.728 { 00:16:30.728 "code": -17, 00:16:30.728 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:30.728 } 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.728 23:34:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.728 [2024-09-30 23:34:10.431031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:30.728 [2024-09-30 23:34:10.431111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.728 [2024-09-30 23:34:10.431149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:30.728 [2024-09-30 23:34:10.431183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.728 [2024-09-30 23:34:10.433274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.728 [2024-09-30 23:34:10.433339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:30.728 [2024-09-30 23:34:10.433398] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:30.728 [2024-09-30 23:34:10.433457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:30.728 pt1 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.728 23:34:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.728 
"name": "raid_bdev1", 00:16:30.728 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:30.728 "strip_size_kb": 0, 00:16:30.728 "state": "configuring", 00:16:30.728 "raid_level": "raid1", 00:16:30.728 "superblock": true, 00:16:30.728 "num_base_bdevs": 2, 00:16:30.728 "num_base_bdevs_discovered": 1, 00:16:30.728 "num_base_bdevs_operational": 2, 00:16:30.728 "base_bdevs_list": [ 00:16:30.728 { 00:16:30.728 "name": "pt1", 00:16:30.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.728 "is_configured": true, 00:16:30.728 "data_offset": 256, 00:16:30.728 "data_size": 7936 00:16:30.728 }, 00:16:30.728 { 00:16:30.728 "name": null, 00:16:30.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.728 "is_configured": false, 00:16:30.728 "data_offset": 256, 00:16:30.728 "data_size": 7936 00:16:30.728 } 00:16:30.728 ] 00:16:30.728 }' 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.728 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.298 [2024-09-30 23:34:10.866312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.298 [2024-09-30 23:34:10.866403] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.298 [2024-09-30 23:34:10.866439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:31.298 [2024-09-30 23:34:10.866467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.298 [2024-09-30 23:34:10.866613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.298 [2024-09-30 23:34:10.866666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.298 [2024-09-30 23:34:10.866721] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:31.298 [2024-09-30 23:34:10.866772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.298 [2024-09-30 23:34:10.866897] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:31.298 [2024-09-30 23:34:10.866938] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:31.298 [2024-09-30 23:34:10.867035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:31.298 [2024-09-30 23:34:10.867116] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:31.298 [2024-09-30 23:34:10.867154] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:31.298 [2024-09-30 23:34:10.867247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.298 pt2 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.298 23:34:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.298 "name": 
"raid_bdev1", 00:16:31.298 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:31.298 "strip_size_kb": 0, 00:16:31.298 "state": "online", 00:16:31.298 "raid_level": "raid1", 00:16:31.298 "superblock": true, 00:16:31.298 "num_base_bdevs": 2, 00:16:31.298 "num_base_bdevs_discovered": 2, 00:16:31.298 "num_base_bdevs_operational": 2, 00:16:31.298 "base_bdevs_list": [ 00:16:31.298 { 00:16:31.298 "name": "pt1", 00:16:31.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.298 "is_configured": true, 00:16:31.298 "data_offset": 256, 00:16:31.298 "data_size": 7936 00:16:31.298 }, 00:16:31.298 { 00:16:31.298 "name": "pt2", 00:16:31.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.298 "is_configured": true, 00:16:31.298 "data_offset": 256, 00:16:31.298 "data_size": 7936 00:16:31.298 } 00:16:31.298 ] 00:16:31.298 }' 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.298 23:34:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.558 23:34:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.558 [2024-09-30 23:34:11.337745] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.558 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:31.558 "name": "raid_bdev1", 00:16:31.558 "aliases": [ 00:16:31.558 "970972f4-7cbc-433a-a294-49fbc83a0658" 00:16:31.558 ], 00:16:31.558 "product_name": "Raid Volume", 00:16:31.558 "block_size": 4128, 00:16:31.558 "num_blocks": 7936, 00:16:31.558 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:31.558 "md_size": 32, 00:16:31.558 "md_interleave": true, 00:16:31.558 "dif_type": 0, 00:16:31.558 "assigned_rate_limits": { 00:16:31.558 "rw_ios_per_sec": 0, 00:16:31.558 "rw_mbytes_per_sec": 0, 00:16:31.558 "r_mbytes_per_sec": 0, 00:16:31.558 "w_mbytes_per_sec": 0 00:16:31.558 }, 00:16:31.558 "claimed": false, 00:16:31.558 "zoned": false, 00:16:31.558 "supported_io_types": { 00:16:31.558 "read": true, 00:16:31.558 "write": true, 00:16:31.558 "unmap": false, 00:16:31.558 "flush": false, 00:16:31.558 "reset": true, 00:16:31.558 "nvme_admin": false, 00:16:31.558 "nvme_io": false, 00:16:31.558 "nvme_io_md": false, 00:16:31.558 "write_zeroes": true, 00:16:31.558 "zcopy": false, 00:16:31.558 "get_zone_info": false, 00:16:31.558 "zone_management": false, 00:16:31.558 "zone_append": false, 00:16:31.558 "compare": false, 00:16:31.558 "compare_and_write": false, 00:16:31.558 "abort": false, 00:16:31.558 "seek_hole": false, 00:16:31.558 "seek_data": false, 00:16:31.558 "copy": false, 00:16:31.558 "nvme_iov_md": 
false 00:16:31.558 }, 00:16:31.558 "memory_domains": [ 00:16:31.558 { 00:16:31.558 "dma_device_id": "system", 00:16:31.558 "dma_device_type": 1 00:16:31.558 }, 00:16:31.558 { 00:16:31.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.558 "dma_device_type": 2 00:16:31.558 }, 00:16:31.558 { 00:16:31.558 "dma_device_id": "system", 00:16:31.558 "dma_device_type": 1 00:16:31.558 }, 00:16:31.558 { 00:16:31.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.558 "dma_device_type": 2 00:16:31.558 } 00:16:31.558 ], 00:16:31.558 "driver_specific": { 00:16:31.558 "raid": { 00:16:31.558 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:31.558 "strip_size_kb": 0, 00:16:31.558 "state": "online", 00:16:31.559 "raid_level": "raid1", 00:16:31.559 "superblock": true, 00:16:31.559 "num_base_bdevs": 2, 00:16:31.559 "num_base_bdevs_discovered": 2, 00:16:31.559 "num_base_bdevs_operational": 2, 00:16:31.559 "base_bdevs_list": [ 00:16:31.559 { 00:16:31.559 "name": "pt1", 00:16:31.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.559 "is_configured": true, 00:16:31.559 "data_offset": 256, 00:16:31.559 "data_size": 7936 00:16:31.559 }, 00:16:31.559 { 00:16:31.559 "name": "pt2", 00:16:31.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.559 "is_configured": true, 00:16:31.559 "data_offset": 256, 00:16:31.559 "data_size": 7936 00:16:31.559 } 00:16:31.559 ] 00:16:31.559 } 00:16:31.559 } 00:16:31.559 }' 00:16:31.559 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.818 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:31.818 pt2' 00:16:31.818 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.818 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:31.818 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.818 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.818 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:31.818 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.818 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.819 [2024-09-30 23:34:11.601283] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 970972f4-7cbc-433a-a294-49fbc83a0658 '!=' 970972f4-7cbc-433a-a294-49fbc83a0658 ']' 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.819 [2024-09-30 23:34:11.629026] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.819 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.079 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:32.079 "name": "raid_bdev1", 00:16:32.079 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:32.079 "strip_size_kb": 0, 00:16:32.079 "state": "online", 00:16:32.079 "raid_level": "raid1", 00:16:32.079 "superblock": true, 00:16:32.079 "num_base_bdevs": 2, 00:16:32.079 "num_base_bdevs_discovered": 1, 00:16:32.079 "num_base_bdevs_operational": 1, 00:16:32.079 "base_bdevs_list": [ 00:16:32.079 { 00:16:32.079 "name": null, 00:16:32.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.079 "is_configured": false, 00:16:32.079 "data_offset": 0, 00:16:32.079 "data_size": 7936 00:16:32.079 }, 00:16:32.079 { 00:16:32.079 "name": "pt2", 00:16:32.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.079 "is_configured": true, 00:16:32.079 "data_offset": 256, 00:16:32.079 "data_size": 7936 00:16:32.079 } 00:16:32.079 ] 00:16:32.079 }' 00:16:32.079 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.079 23:34:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.339 [2024-09-30 23:34:12.104153] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.339 [2024-09-30 23:34:12.104221] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.339 [2024-09-30 23:34:12.104300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.339 [2024-09-30 23:34:12.104355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:32.339 [2024-09-30 23:34:12.104415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.339 [2024-09-30 23:34:12.160068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.339 [2024-09-30 23:34:12.160151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.339 [2024-09-30 23:34:12.160182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:32.339 [2024-09-30 23:34:12.160205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.339 [2024-09-30 23:34:12.162286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.339 [2024-09-30 23:34:12.162352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.339 [2024-09-30 23:34:12.162418] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:32.339 [2024-09-30 23:34:12.162453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.339 [2024-09-30 23:34:12.162508] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:32.339 [2024-09-30 23:34:12.162516] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:16:32.339 [2024-09-30 23:34:12.162601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:32.339 [2024-09-30 23:34:12.162656] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:32.339 [2024-09-30 23:34:12.162665] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:32.339 [2024-09-30 23:34:12.162714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.339 pt2 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.339 23:34:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.339 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.599 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.599 "name": "raid_bdev1", 00:16:32.599 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:32.599 "strip_size_kb": 0, 00:16:32.599 "state": "online", 00:16:32.599 "raid_level": "raid1", 00:16:32.599 "superblock": true, 00:16:32.599 "num_base_bdevs": 2, 00:16:32.599 "num_base_bdevs_discovered": 1, 00:16:32.599 "num_base_bdevs_operational": 1, 00:16:32.599 "base_bdevs_list": [ 00:16:32.599 { 00:16:32.599 "name": null, 00:16:32.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.599 "is_configured": false, 00:16:32.599 "data_offset": 256, 00:16:32.599 "data_size": 7936 00:16:32.599 }, 00:16:32.599 { 00:16:32.599 "name": "pt2", 00:16:32.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.599 "is_configured": true, 00:16:32.599 "data_offset": 256, 00:16:32.599 "data_size": 7936 00:16:32.599 } 00:16:32.599 ] 00:16:32.599 }' 00:16:32.599 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.599 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.859 23:34:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.859 [2024-09-30 23:34:12.651256] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.859 [2024-09-30 23:34:12.651320] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.859 [2024-09-30 23:34:12.651386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.859 [2024-09-30 23:34:12.651438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.859 [2024-09-30 23:34:12.651471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.859 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.118 [2024-09-30 23:34:12.715149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.119 [2024-09-30 23:34:12.715238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.119 [2024-09-30 23:34:12.715272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:33.119 [2024-09-30 23:34:12.715305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.119 [2024-09-30 23:34:12.717317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.119 [2024-09-30 23:34:12.717379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.119 [2024-09-30 23:34:12.717441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:33.119 [2024-09-30 23:34:12.717493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.119 [2024-09-30 23:34:12.717596] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:33.119 [2024-09-30 23:34:12.717637] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.119 [2024-09-30 23:34:12.717665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:33.119 [2024-09-30 23:34:12.717756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.119 [2024-09-30 23:34:12.717849] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007400 00:16:33.119 [2024-09-30 23:34:12.717901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:33.119 [2024-09-30 23:34:12.717979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:33.119 [2024-09-30 23:34:12.718066] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:33.119 [2024-09-30 23:34:12.718101] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:33.119 [2024-09-30 23:34:12.718196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.119 pt1 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.119 23:34:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.119 "name": "raid_bdev1", 00:16:33.119 "uuid": "970972f4-7cbc-433a-a294-49fbc83a0658", 00:16:33.119 "strip_size_kb": 0, 00:16:33.119 "state": "online", 00:16:33.119 "raid_level": "raid1", 00:16:33.119 "superblock": true, 00:16:33.119 "num_base_bdevs": 2, 00:16:33.119 "num_base_bdevs_discovered": 1, 00:16:33.119 "num_base_bdevs_operational": 1, 00:16:33.119 "base_bdevs_list": [ 00:16:33.119 { 00:16:33.119 "name": null, 00:16:33.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.119 "is_configured": false, 00:16:33.119 "data_offset": 256, 00:16:33.119 "data_size": 7936 00:16:33.119 }, 00:16:33.119 { 00:16:33.119 "name": "pt2", 00:16:33.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.119 "is_configured": true, 00:16:33.119 "data_offset": 256, 00:16:33.119 "data_size": 7936 00:16:33.119 } 00:16:33.119 ] 00:16:33.119 }' 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.119 23:34:12 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.378 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:33.378 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:33.378 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.378 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.379 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.379 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:33.379 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:33.379 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.379 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.379 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:33.379 [2024-09-30 23:34:13.218484] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 970972f4-7cbc-433a-a294-49fbc83a0658 '!=' 970972f4-7cbc-433a-a294-49fbc83a0658 ']' 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99072 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99072 ']' 00:16:33.638 23:34:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99072 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99072 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.638 killing process with pid 99072 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99072' 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99072 00:16:33.638 [2024-09-30 23:34:13.268452] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.638 [2024-09-30 23:34:13.268518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.638 [2024-09-30 23:34:13.268560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.638 [2024-09-30 23:34:13.268569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:33.638 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99072 00:16:33.638 [2024-09-30 23:34:13.312543] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.898 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:33.898 ************************************ 00:16:33.898 END TEST 
raid_superblock_test_md_interleaved 00:16:33.898 ************************************ 00:16:33.898 00:16:33.898 real 0m5.159s 00:16:33.898 user 0m8.195s 00:16:33.898 sys 0m1.171s 00:16:33.898 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.898 23:34:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.158 23:34:13 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:34.158 23:34:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:34.158 23:34:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.158 23:34:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.158 ************************************ 00:16:34.158 START TEST raid_rebuild_test_sb_md_interleaved 00:16:34.158 ************************************ 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.158 23:34:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:34.158 
23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99389 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99389 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99389 ']' 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.158 23:34:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.158 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:34.158 Zero copy mechanism will not be used. 00:16:34.158 [2024-09-30 23:34:13.871662] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:16:34.158 [2024-09-30 23:34:13.871801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99389 ] 00:16:34.418 [2024-09-30 23:34:14.031571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.418 [2024-09-30 23:34:14.104785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.418 [2024-09-30 23:34:14.181315] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.418 [2024-09-30 23:34:14.181359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.988 BaseBdev1_malloc 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.988 23:34:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.988 [2024-09-30 23:34:14.751975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:34.988 [2024-09-30 23:34:14.752118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.988 [2024-09-30 23:34:14.752167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:34.988 [2024-09-30 23:34:14.752196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.988 [2024-09-30 23:34:14.754345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.988 [2024-09-30 23:34:14.754419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:34.988 BaseBdev1 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.988 BaseBdev2_malloc 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.988 [2024-09-30 23:34:14.804007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:34.988 [2024-09-30 23:34:14.804108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.988 [2024-09-30 23:34:14.804155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:34.988 [2024-09-30 23:34:14.804177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.988 [2024-09-30 23:34:14.808421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.988 [2024-09-30 23:34:14.808486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:34.988 BaseBdev2 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:34.988 spare_malloc 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.988 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.248 spare_delay 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.248 [2024-09-30 23:34:14.853627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.248 [2024-09-30 23:34:14.853768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.248 [2024-09-30 23:34:14.853797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:35.248 [2024-09-30 23:34:14.853807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.248 [2024-09-30 23:34:14.855987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.248 [2024-09-30 23:34:14.856023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.248 spare 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.248 [2024-09-30 23:34:14.865629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.248 [2024-09-30 23:34:14.867718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.248 [2024-09-30 
23:34:14.867950] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:35.248 [2024-09-30 23:34:14.867999] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:35.248 [2024-09-30 23:34:14.868110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:35.248 [2024-09-30 23:34:14.868206] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:35.248 [2024-09-30 23:34:14.868243] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:35.248 [2024-09-30 23:34:14.868358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.248 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.249 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.249 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.249 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.249 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.249 "name": "raid_bdev1", 00:16:35.249 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:35.249 "strip_size_kb": 0, 00:16:35.249 "state": "online", 00:16:35.249 "raid_level": "raid1", 00:16:35.249 "superblock": true, 00:16:35.249 "num_base_bdevs": 2, 00:16:35.249 "num_base_bdevs_discovered": 2, 00:16:35.249 "num_base_bdevs_operational": 2, 00:16:35.249 "base_bdevs_list": [ 00:16:35.249 { 00:16:35.249 "name": "BaseBdev1", 00:16:35.249 "uuid": "89125335-cdc6-5f89-974d-70b29fb1363e", 00:16:35.249 "is_configured": true, 00:16:35.249 "data_offset": 256, 00:16:35.249 "data_size": 7936 00:16:35.249 }, 00:16:35.249 { 00:16:35.249 "name": "BaseBdev2", 00:16:35.249 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:35.249 "is_configured": true, 00:16:35.249 "data_offset": 256, 00:16:35.249 "data_size": 7936 00:16:35.249 } 00:16:35.249 ] 00:16:35.249 }' 00:16:35.249 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.249 23:34:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.509 23:34:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.509 [2024-09-30 23:34:15.273249] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.509 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:35.769 23:34:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.769 [2024-09-30 23:34:15.368831] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.769 23:34:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.769 "name": "raid_bdev1", 00:16:35.769 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:35.769 "strip_size_kb": 0, 00:16:35.769 "state": "online", 00:16:35.769 "raid_level": "raid1", 00:16:35.769 "superblock": true, 00:16:35.769 "num_base_bdevs": 2, 00:16:35.769 "num_base_bdevs_discovered": 1, 00:16:35.769 "num_base_bdevs_operational": 1, 00:16:35.769 "base_bdevs_list": [ 00:16:35.769 { 00:16:35.769 "name": null, 00:16:35.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.769 "is_configured": false, 00:16:35.769 "data_offset": 0, 00:16:35.769 "data_size": 7936 00:16:35.769 }, 00:16:35.769 { 00:16:35.769 "name": "BaseBdev2", 00:16:35.769 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:35.769 "is_configured": true, 00:16:35.769 "data_offset": 256, 00:16:35.769 "data_size": 7936 00:16:35.769 } 00:16:35.769 ] 00:16:35.769 }' 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.769 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.029 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.029 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.029 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:36.029 [2024-09-30 23:34:15.808055] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.029 [2024-09-30 23:34:15.813022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:36.029 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.029 23:34:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:36.029 [2024-09-30 23:34:15.815011] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.968 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.968 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.968 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.968 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.968 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.228 "name": "raid_bdev1", 00:16:37.228 
"uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:37.228 "strip_size_kb": 0, 00:16:37.228 "state": "online", 00:16:37.228 "raid_level": "raid1", 00:16:37.228 "superblock": true, 00:16:37.228 "num_base_bdevs": 2, 00:16:37.228 "num_base_bdevs_discovered": 2, 00:16:37.228 "num_base_bdevs_operational": 2, 00:16:37.228 "process": { 00:16:37.228 "type": "rebuild", 00:16:37.228 "target": "spare", 00:16:37.228 "progress": { 00:16:37.228 "blocks": 2560, 00:16:37.228 "percent": 32 00:16:37.228 } 00:16:37.228 }, 00:16:37.228 "base_bdevs_list": [ 00:16:37.228 { 00:16:37.228 "name": "spare", 00:16:37.228 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:37.228 "is_configured": true, 00:16:37.228 "data_offset": 256, 00:16:37.228 "data_size": 7936 00:16:37.228 }, 00:16:37.228 { 00:16:37.228 "name": "BaseBdev2", 00:16:37.228 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:37.228 "is_configured": true, 00:16:37.228 "data_offset": 256, 00:16:37.228 "data_size": 7936 00:16:37.228 } 00:16:37.228 ] 00:16:37.228 }' 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.228 23:34:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.228 [2024-09-30 23:34:16.980179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:37.228 [2024-09-30 23:34:17.023461] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.228 [2024-09-30 23:34:17.023568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.228 [2024-09-30 23:34:17.023604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.228 [2024-09-30 23:34:17.023626] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.228 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.488 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.488 "name": "raid_bdev1", 00:16:37.488 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:37.488 "strip_size_kb": 0, 00:16:37.488 "state": "online", 00:16:37.488 "raid_level": "raid1", 00:16:37.488 "superblock": true, 00:16:37.488 "num_base_bdevs": 2, 00:16:37.488 "num_base_bdevs_discovered": 1, 00:16:37.488 "num_base_bdevs_operational": 1, 00:16:37.488 "base_bdevs_list": [ 00:16:37.488 { 00:16:37.488 "name": null, 00:16:37.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.488 "is_configured": false, 00:16:37.488 "data_offset": 0, 00:16:37.488 "data_size": 7936 00:16:37.488 }, 00:16:37.488 { 00:16:37.488 "name": "BaseBdev2", 00:16:37.488 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:37.488 "is_configured": true, 00:16:37.488 "data_offset": 256, 00:16:37.488 "data_size": 7936 00:16:37.488 } 00:16:37.488 ] 00:16:37.488 }' 00:16:37.488 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.488 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.748 "name": "raid_bdev1", 00:16:37.748 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:37.748 "strip_size_kb": 0, 00:16:37.748 "state": "online", 00:16:37.748 "raid_level": "raid1", 00:16:37.748 "superblock": true, 00:16:37.748 "num_base_bdevs": 2, 00:16:37.748 "num_base_bdevs_discovered": 1, 00:16:37.748 "num_base_bdevs_operational": 1, 00:16:37.748 "base_bdevs_list": [ 00:16:37.748 { 00:16:37.748 "name": null, 00:16:37.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.748 "is_configured": false, 00:16:37.748 "data_offset": 0, 00:16:37.748 "data_size": 7936 00:16:37.748 }, 00:16:37.748 { 00:16:37.748 "name": "BaseBdev2", 00:16:37.748 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:37.748 "is_configured": true, 00:16:37.748 "data_offset": 256, 00:16:37.748 "data_size": 7936 00:16:37.748 } 00:16:37.748 ] 00:16:37.748 }' 
00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:37.748 [2024-09-30 23:34:17.556607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.748 [2024-09-30 23:34:17.560887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.748 23:34:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:37.748 [2024-09-30 23:34:17.562941] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.128 "name": "raid_bdev1", 00:16:39.128 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:39.128 "strip_size_kb": 0, 00:16:39.128 "state": "online", 00:16:39.128 "raid_level": "raid1", 00:16:39.128 "superblock": true, 00:16:39.128 "num_base_bdevs": 2, 00:16:39.128 "num_base_bdevs_discovered": 2, 00:16:39.128 "num_base_bdevs_operational": 2, 00:16:39.128 "process": { 00:16:39.128 "type": "rebuild", 00:16:39.128 "target": "spare", 00:16:39.128 "progress": { 00:16:39.128 "blocks": 2560, 00:16:39.128 "percent": 32 00:16:39.128 } 00:16:39.128 }, 00:16:39.128 "base_bdevs_list": [ 00:16:39.128 { 00:16:39.128 "name": "spare", 00:16:39.128 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:39.128 "is_configured": true, 00:16:39.128 "data_offset": 256, 00:16:39.128 "data_size": 7936 00:16:39.128 }, 00:16:39.128 { 00:16:39.128 "name": "BaseBdev2", 00:16:39.128 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:39.128 "is_configured": true, 00:16:39.128 "data_offset": 256, 00:16:39.128 "data_size": 7936 00:16:39.128 } 00:16:39.128 ] 00:16:39.128 }' 00:16:39.128 23:34:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:39.128 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:39.128 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=619 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.129 23:34:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.129 "name": "raid_bdev1", 00:16:39.129 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:39.129 "strip_size_kb": 0, 00:16:39.129 "state": "online", 00:16:39.129 "raid_level": "raid1", 00:16:39.129 "superblock": true, 00:16:39.129 "num_base_bdevs": 2, 00:16:39.129 "num_base_bdevs_discovered": 2, 00:16:39.129 "num_base_bdevs_operational": 2, 00:16:39.129 "process": { 00:16:39.129 "type": "rebuild", 00:16:39.129 "target": "spare", 00:16:39.129 "progress": { 00:16:39.129 "blocks": 2816, 00:16:39.129 "percent": 35 00:16:39.129 } 00:16:39.129 }, 00:16:39.129 "base_bdevs_list": [ 00:16:39.129 { 00:16:39.129 "name": "spare", 00:16:39.129 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:39.129 "is_configured": true, 00:16:39.129 "data_offset": 256, 00:16:39.129 "data_size": 7936 00:16:39.129 }, 00:16:39.129 { 00:16:39.129 "name": "BaseBdev2", 00:16:39.129 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:39.129 "is_configured": true, 00:16:39.129 "data_offset": 256, 00:16:39.129 "data_size": 7936 00:16:39.129 } 00:16:39.129 ] 00:16:39.129 }' 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.129 23:34:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.065 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.065 23:34:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.065 "name": "raid_bdev1", 00:16:40.065 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:40.065 "strip_size_kb": 0, 00:16:40.065 "state": "online", 00:16:40.065 "raid_level": "raid1", 00:16:40.065 "superblock": true, 00:16:40.065 "num_base_bdevs": 2, 00:16:40.065 "num_base_bdevs_discovered": 2, 00:16:40.065 "num_base_bdevs_operational": 2, 00:16:40.065 "process": { 00:16:40.065 "type": "rebuild", 00:16:40.065 "target": "spare", 00:16:40.065 "progress": { 00:16:40.065 "blocks": 5632, 00:16:40.065 "percent": 70 00:16:40.065 } 00:16:40.065 }, 00:16:40.065 "base_bdevs_list": [ 00:16:40.065 { 00:16:40.065 "name": "spare", 00:16:40.065 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:40.065 "is_configured": true, 00:16:40.065 "data_offset": 256, 00:16:40.065 "data_size": 7936 00:16:40.065 }, 00:16:40.065 { 00:16:40.065 "name": "BaseBdev2", 00:16:40.065 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:40.065 "is_configured": true, 00:16:40.065 "data_offset": 256, 00:16:40.065 "data_size": 7936 00:16:40.065 } 00:16:40.065 ] 00:16:40.065 }' 00:16:40.324 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.324 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.324 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.324 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.324 23:34:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.892 [2024-09-30 23:34:20.683135] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:40.892 [2024-09-30 23:34:20.683217] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:40.892 [2024-09-30 23:34:20.683328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.151 23:34:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.151 23:34:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.151 23:34:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.151 23:34:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.151 23:34:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.151 23:34:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.151 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.151 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.410 "name": "raid_bdev1", 00:16:41.410 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:41.410 "strip_size_kb": 0, 00:16:41.410 "state": "online", 00:16:41.410 "raid_level": "raid1", 00:16:41.410 "superblock": true, 00:16:41.410 "num_base_bdevs": 2, 00:16:41.410 
"num_base_bdevs_discovered": 2, 00:16:41.410 "num_base_bdevs_operational": 2, 00:16:41.410 "base_bdevs_list": [ 00:16:41.410 { 00:16:41.410 "name": "spare", 00:16:41.410 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:41.410 "is_configured": true, 00:16:41.410 "data_offset": 256, 00:16:41.410 "data_size": 7936 00:16:41.410 }, 00:16:41.410 { 00:16:41.410 "name": "BaseBdev2", 00:16:41.410 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:41.410 "is_configured": true, 00:16:41.410 "data_offset": 256, 00:16:41.410 "data_size": 7936 00:16:41.410 } 00:16:41.410 ] 00:16:41.410 }' 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:41.410 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.411 23:34:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.411 "name": "raid_bdev1", 00:16:41.411 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:41.411 "strip_size_kb": 0, 00:16:41.411 "state": "online", 00:16:41.411 "raid_level": "raid1", 00:16:41.411 "superblock": true, 00:16:41.411 "num_base_bdevs": 2, 00:16:41.411 "num_base_bdevs_discovered": 2, 00:16:41.411 "num_base_bdevs_operational": 2, 00:16:41.411 "base_bdevs_list": [ 00:16:41.411 { 00:16:41.411 "name": "spare", 00:16:41.411 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:41.411 "is_configured": true, 00:16:41.411 "data_offset": 256, 00:16:41.411 "data_size": 7936 00:16:41.411 }, 00:16:41.411 { 00:16:41.411 "name": "BaseBdev2", 00:16:41.411 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:41.411 "is_configured": true, 00:16:41.411 "data_offset": 256, 00:16:41.411 "data_size": 7936 00:16:41.411 } 00:16:41.411 ] 00:16:41.411 }' 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:41.411 23:34:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.411 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.670 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.670 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.670 "name": 
"raid_bdev1", 00:16:41.670 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:41.670 "strip_size_kb": 0, 00:16:41.670 "state": "online", 00:16:41.670 "raid_level": "raid1", 00:16:41.670 "superblock": true, 00:16:41.670 "num_base_bdevs": 2, 00:16:41.670 "num_base_bdevs_discovered": 2, 00:16:41.670 "num_base_bdevs_operational": 2, 00:16:41.670 "base_bdevs_list": [ 00:16:41.670 { 00:16:41.670 "name": "spare", 00:16:41.670 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:41.670 "is_configured": true, 00:16:41.670 "data_offset": 256, 00:16:41.670 "data_size": 7936 00:16:41.670 }, 00:16:41.670 { 00:16:41.670 "name": "BaseBdev2", 00:16:41.670 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:41.670 "is_configured": true, 00:16:41.670 "data_offset": 256, 00:16:41.670 "data_size": 7936 00:16:41.670 } 00:16:41.670 ] 00:16:41.670 }' 00:16:41.670 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.670 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 [2024-09-30 23:34:21.683515] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.930 [2024-09-30 23:34:21.683547] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.930 [2024-09-30 23:34:21.683667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.930 [2024-09-30 23:34:21.683732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.930 [2024-09-30 
23:34:21.683744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 23:34:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.930 [2024-09-30 23:34:21.755390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.930 [2024-09-30 23:34:21.755446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.930 [2024-09-30 23:34:21.755466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:41.930 [2024-09-30 23:34:21.755478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.930 [2024-09-30 23:34:21.757600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.930 [2024-09-30 23:34:21.757635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.930 [2024-09-30 23:34:21.757687] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.930 [2024-09-30 23:34:21.757729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.930 [2024-09-30 23:34:21.757833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.930 spare 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.930 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.190 [2024-09-30 23:34:21.857734] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:42.190 [2024-09-30 23:34:21.857760] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:42.190 [2024-09-30 23:34:21.857854] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:42.190 [2024-09-30 23:34:21.857946] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:42.190 [2024-09-30 23:34:21.857960] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:42.190 [2024-09-30 23:34:21.858041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.190 23:34:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.190 "name": "raid_bdev1", 00:16:42.190 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:42.190 "strip_size_kb": 0, 00:16:42.190 "state": "online", 00:16:42.190 "raid_level": "raid1", 00:16:42.190 "superblock": true, 00:16:42.190 "num_base_bdevs": 2, 00:16:42.190 "num_base_bdevs_discovered": 2, 00:16:42.190 "num_base_bdevs_operational": 2, 00:16:42.190 "base_bdevs_list": [ 00:16:42.190 { 00:16:42.190 "name": "spare", 00:16:42.190 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:42.190 "is_configured": true, 00:16:42.190 "data_offset": 256, 00:16:42.190 "data_size": 7936 00:16:42.190 }, 00:16:42.190 { 00:16:42.190 "name": "BaseBdev2", 00:16:42.190 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:42.190 "is_configured": true, 00:16:42.190 "data_offset": 256, 00:16:42.190 "data_size": 7936 00:16:42.190 } 00:16:42.190 ] 00:16:42.190 }' 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.190 23:34:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.759 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.760 23:34:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.760 "name": "raid_bdev1", 00:16:42.760 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:42.760 "strip_size_kb": 0, 00:16:42.760 "state": "online", 00:16:42.760 "raid_level": "raid1", 00:16:42.760 "superblock": true, 00:16:42.760 "num_base_bdevs": 2, 00:16:42.760 "num_base_bdevs_discovered": 2, 00:16:42.760 "num_base_bdevs_operational": 2, 00:16:42.760 "base_bdevs_list": [ 00:16:42.760 { 00:16:42.760 "name": "spare", 00:16:42.760 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:42.760 "is_configured": true, 00:16:42.760 "data_offset": 256, 00:16:42.760 "data_size": 7936 00:16:42.760 }, 00:16:42.760 { 00:16:42.760 "name": "BaseBdev2", 00:16:42.760 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:42.760 "is_configured": true, 00:16:42.760 "data_offset": 256, 00:16:42.760 "data_size": 7936 00:16:42.760 } 00:16:42.760 ] 00:16:42.760 }' 00:16:42.760 23:34:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.760 [2024-09-30 23:34:22.502138] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.760 23:34:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.760 "name": "raid_bdev1", 00:16:42.760 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:42.760 "strip_size_kb": 0, 00:16:42.760 "state": "online", 00:16:42.760 
"raid_level": "raid1", 00:16:42.760 "superblock": true, 00:16:42.760 "num_base_bdevs": 2, 00:16:42.760 "num_base_bdevs_discovered": 1, 00:16:42.760 "num_base_bdevs_operational": 1, 00:16:42.760 "base_bdevs_list": [ 00:16:42.760 { 00:16:42.760 "name": null, 00:16:42.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.760 "is_configured": false, 00:16:42.760 "data_offset": 0, 00:16:42.760 "data_size": 7936 00:16:42.760 }, 00:16:42.760 { 00:16:42.760 "name": "BaseBdev2", 00:16:42.760 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:42.760 "is_configured": true, 00:16:42.760 "data_offset": 256, 00:16:42.760 "data_size": 7936 00:16:42.760 } 00:16:42.760 ] 00:16:42.760 }' 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.760 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.331 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.331 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.331 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.331 [2024-09-30 23:34:22.949427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.331 [2024-09-30 23:34:22.949608] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.331 [2024-09-30 23:34:22.949624] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:43.331 [2024-09-30 23:34:22.949663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.331 [2024-09-30 23:34:22.954453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:43.331 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.331 23:34:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:43.331 [2024-09-30 23:34:22.956454] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.309 23:34:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.309 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:44.309 "name": "raid_bdev1", 00:16:44.309 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:44.309 "strip_size_kb": 0, 00:16:44.309 "state": "online", 00:16:44.309 "raid_level": "raid1", 00:16:44.309 "superblock": true, 00:16:44.309 "num_base_bdevs": 2, 00:16:44.309 "num_base_bdevs_discovered": 2, 00:16:44.309 "num_base_bdevs_operational": 2, 00:16:44.309 "process": { 00:16:44.309 "type": "rebuild", 00:16:44.309 "target": "spare", 00:16:44.309 "progress": { 00:16:44.309 "blocks": 2560, 00:16:44.309 "percent": 32 00:16:44.309 } 00:16:44.309 }, 00:16:44.309 "base_bdevs_list": [ 00:16:44.309 { 00:16:44.309 "name": "spare", 00:16:44.309 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:44.309 "is_configured": true, 00:16:44.309 "data_offset": 256, 00:16:44.309 "data_size": 7936 00:16:44.309 }, 00:16:44.309 { 00:16:44.309 "name": "BaseBdev2", 00:16:44.309 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:44.309 "is_configured": true, 00:16:44.309 "data_offset": 256, 00:16:44.309 "data_size": 7936 00:16:44.309 } 00:16:44.309 ] 00:16:44.309 }' 00:16:44.309 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.309 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.309 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.309 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.309 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:44.309 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.309 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.309 [2024-09-30 23:34:24.101211] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.579 [2024-09-30 23:34:24.164182] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:44.579 [2024-09-30 23:34:24.164235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.579 [2024-09-30 23:34:24.164253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.579 [2024-09-30 23:34:24.164260] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.579 23:34:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.579 "name": "raid_bdev1", 00:16:44.579 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:44.579 "strip_size_kb": 0, 00:16:44.579 "state": "online", 00:16:44.579 "raid_level": "raid1", 00:16:44.579 "superblock": true, 00:16:44.579 "num_base_bdevs": 2, 00:16:44.579 "num_base_bdevs_discovered": 1, 00:16:44.579 "num_base_bdevs_operational": 1, 00:16:44.579 "base_bdevs_list": [ 00:16:44.579 { 00:16:44.579 "name": null, 00:16:44.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.579 "is_configured": false, 00:16:44.579 "data_offset": 0, 00:16:44.579 "data_size": 7936 00:16:44.579 }, 00:16:44.579 { 00:16:44.579 "name": "BaseBdev2", 00:16:44.579 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:44.579 "is_configured": true, 00:16:44.579 "data_offset": 256, 00:16:44.579 "data_size": 7936 00:16:44.579 } 00:16:44.579 ] 00:16:44.579 }' 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.579 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.839 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:44.839 23:34:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.839 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.839 [2024-09-30 23:34:24.649597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:44.839 [2024-09-30 23:34:24.649651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.839 [2024-09-30 23:34:24.649677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:44.839 [2024-09-30 23:34:24.649687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.839 [2024-09-30 23:34:24.649915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.839 [2024-09-30 23:34:24.649930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:44.839 [2024-09-30 23:34:24.649984] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:44.839 [2024-09-30 23:34:24.649996] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:44.839 [2024-09-30 23:34:24.650010] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:44.839 [2024-09-30 23:34:24.650031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.839 [2024-09-30 23:34:24.653714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:44.839 spare 00:16:44.839 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.839 23:34:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:44.839 [2024-09-30 23:34:24.655772] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:46.220 "name": "raid_bdev1", 00:16:46.220 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:46.220 "strip_size_kb": 0, 00:16:46.220 "state": "online", 00:16:46.220 "raid_level": "raid1", 00:16:46.220 "superblock": true, 00:16:46.220 "num_base_bdevs": 2, 00:16:46.220 "num_base_bdevs_discovered": 2, 00:16:46.220 "num_base_bdevs_operational": 2, 00:16:46.220 "process": { 00:16:46.220 "type": "rebuild", 00:16:46.220 "target": "spare", 00:16:46.220 "progress": { 00:16:46.220 "blocks": 2560, 00:16:46.220 "percent": 32 00:16:46.220 } 00:16:46.220 }, 00:16:46.220 "base_bdevs_list": [ 00:16:46.220 { 00:16:46.220 "name": "spare", 00:16:46.220 "uuid": "5ae4a517-0f1f-5e4f-a894-1ef70f88b9b7", 00:16:46.220 "is_configured": true, 00:16:46.220 "data_offset": 256, 00:16:46.220 "data_size": 7936 00:16:46.220 }, 00:16:46.220 { 00:16:46.220 "name": "BaseBdev2", 00:16:46.220 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:46.220 "is_configured": true, 00:16:46.220 "data_offset": 256, 00:16:46.220 "data_size": 7936 00:16:46.220 } 00:16:46.220 ] 00:16:46.220 }' 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.220 [2024-09-30 
23:34:25.817174] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.220 [2024-09-30 23:34:25.863297] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.220 [2024-09-30 23:34:25.863352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.220 [2024-09-30 23:34:25.863366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.220 [2024-09-30 23:34:25.863376] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.220 23:34:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.220 "name": "raid_bdev1", 00:16:46.220 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:46.220 "strip_size_kb": 0, 00:16:46.220 "state": "online", 00:16:46.220 "raid_level": "raid1", 00:16:46.220 "superblock": true, 00:16:46.220 "num_base_bdevs": 2, 00:16:46.220 "num_base_bdevs_discovered": 1, 00:16:46.220 "num_base_bdevs_operational": 1, 00:16:46.220 "base_bdevs_list": [ 00:16:46.220 { 00:16:46.220 "name": null, 00:16:46.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.220 "is_configured": false, 00:16:46.220 "data_offset": 0, 00:16:46.220 "data_size": 7936 00:16:46.220 }, 00:16:46.220 { 00:16:46.220 "name": "BaseBdev2", 00:16:46.220 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:46.220 "is_configured": true, 00:16:46.220 "data_offset": 256, 00:16:46.220 "data_size": 7936 00:16:46.220 } 00:16:46.220 ] 00:16:46.220 }' 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.220 23:34:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.479 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.479 23:34:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.479 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.479 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.479 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.480 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.480 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.480 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.480 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.480 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.739 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.739 "name": "raid_bdev1", 00:16:46.739 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:46.739 "strip_size_kb": 0, 00:16:46.740 "state": "online", 00:16:46.740 "raid_level": "raid1", 00:16:46.740 "superblock": true, 00:16:46.740 "num_base_bdevs": 2, 00:16:46.740 "num_base_bdevs_discovered": 1, 00:16:46.740 "num_base_bdevs_operational": 1, 00:16:46.740 "base_bdevs_list": [ 00:16:46.740 { 00:16:46.740 "name": null, 00:16:46.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.740 "is_configured": false, 00:16:46.740 "data_offset": 0, 00:16:46.740 "data_size": 7936 00:16:46.740 }, 00:16:46.740 { 00:16:46.740 "name": "BaseBdev2", 00:16:46.740 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:46.740 "is_configured": true, 00:16:46.740 "data_offset": 256, 
00:16:46.740 "data_size": 7936 00:16:46.740 } 00:16:46.740 ] 00:16:46.740 }' 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:46.740 [2024-09-30 23:34:26.440223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:46.740 [2024-09-30 23:34:26.440278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.740 [2024-09-30 23:34:26.440298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:46.740 [2024-09-30 23:34:26.440310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.740 [2024-09-30 23:34:26.440473] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.740 [2024-09-30 23:34:26.440487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.740 [2024-09-30 23:34:26.440534] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:46.740 [2024-09-30 23:34:26.440562] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:46.740 [2024-09-30 23:34:26.440577] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:46.740 [2024-09-30 23:34:26.440594] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:46.740 BaseBdev1 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.740 23:34:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.679 23:34:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.679 "name": "raid_bdev1", 00:16:47.679 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:47.679 "strip_size_kb": 0, 00:16:47.679 "state": "online", 00:16:47.679 "raid_level": "raid1", 00:16:47.679 "superblock": true, 00:16:47.679 "num_base_bdevs": 2, 00:16:47.679 "num_base_bdevs_discovered": 1, 00:16:47.679 "num_base_bdevs_operational": 1, 00:16:47.679 "base_bdevs_list": [ 00:16:47.679 { 00:16:47.679 "name": null, 00:16:47.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.679 "is_configured": false, 00:16:47.679 "data_offset": 0, 00:16:47.679 "data_size": 7936 00:16:47.679 }, 00:16:47.679 { 00:16:47.679 "name": "BaseBdev2", 00:16:47.679 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:47.679 "is_configured": true, 00:16:47.679 "data_offset": 256, 00:16:47.679 "data_size": 7936 00:16:47.679 } 00:16:47.679 ] 00:16:47.679 }' 00:16:47.679 23:34:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.679 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.248 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.248 "name": "raid_bdev1", 00:16:48.248 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:48.248 "strip_size_kb": 0, 00:16:48.248 "state": "online", 00:16:48.248 "raid_level": "raid1", 00:16:48.248 "superblock": true, 00:16:48.249 "num_base_bdevs": 2, 00:16:48.249 "num_base_bdevs_discovered": 1, 00:16:48.249 "num_base_bdevs_operational": 1, 00:16:48.249 "base_bdevs_list": [ 00:16:48.249 { 00:16:48.249 "name": 
null, 00:16:48.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.249 "is_configured": false, 00:16:48.249 "data_offset": 0, 00:16:48.249 "data_size": 7936 00:16:48.249 }, 00:16:48.249 { 00:16:48.249 "name": "BaseBdev2", 00:16:48.249 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:48.249 "is_configured": true, 00:16:48.249 "data_offset": 256, 00:16:48.249 "data_size": 7936 00:16:48.249 } 00:16:48.249 ] 00:16:48.249 }' 00:16:48.249 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.249 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.249 23:34:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.249 [2024-09-30 23:34:28.041501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.249 [2024-09-30 23:34:28.041658] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.249 [2024-09-30 23:34:28.041670] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:48.249 request: 00:16:48.249 { 00:16:48.249 "base_bdev": "BaseBdev1", 00:16:48.249 "raid_bdev": "raid_bdev1", 00:16:48.249 "method": "bdev_raid_add_base_bdev", 00:16:48.249 "req_id": 1 00:16:48.249 } 00:16:48.249 Got JSON-RPC error response 00:16:48.249 response: 00:16:48.249 { 00:16:48.249 "code": -22, 00:16:48.249 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:48.249 } 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:48.249 23:34:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.629 "name": "raid_bdev1", 00:16:49.629 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:49.629 "strip_size_kb": 0, 
00:16:49.629 "state": "online", 00:16:49.629 "raid_level": "raid1", 00:16:49.629 "superblock": true, 00:16:49.629 "num_base_bdevs": 2, 00:16:49.629 "num_base_bdevs_discovered": 1, 00:16:49.629 "num_base_bdevs_operational": 1, 00:16:49.629 "base_bdevs_list": [ 00:16:49.629 { 00:16:49.629 "name": null, 00:16:49.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.629 "is_configured": false, 00:16:49.629 "data_offset": 0, 00:16:49.629 "data_size": 7936 00:16:49.629 }, 00:16:49.629 { 00:16:49.629 "name": "BaseBdev2", 00:16:49.629 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:49.629 "is_configured": true, 00:16:49.629 "data_offset": 256, 00:16:49.629 "data_size": 7936 00:16:49.629 } 00:16:49.629 ] 00:16:49.629 }' 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.629 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.888 
23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.888 "name": "raid_bdev1", 00:16:49.888 "uuid": "b9c409ee-d47f-4036-84ec-7ebcbb3acee2", 00:16:49.888 "strip_size_kb": 0, 00:16:49.888 "state": "online", 00:16:49.888 "raid_level": "raid1", 00:16:49.888 "superblock": true, 00:16:49.888 "num_base_bdevs": 2, 00:16:49.888 "num_base_bdevs_discovered": 1, 00:16:49.888 "num_base_bdevs_operational": 1, 00:16:49.888 "base_bdevs_list": [ 00:16:49.888 { 00:16:49.888 "name": null, 00:16:49.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.888 "is_configured": false, 00:16:49.888 "data_offset": 0, 00:16:49.888 "data_size": 7936 00:16:49.888 }, 00:16:49.888 { 00:16:49.888 "name": "BaseBdev2", 00:16:49.888 "uuid": "8d4fc69d-cced-5f65-8c5e-70d5a1a6649d", 00:16:49.888 "is_configured": true, 00:16:49.888 "data_offset": 256, 00:16:49.888 "data_size": 7936 00:16:49.888 } 00:16:49.888 ] 00:16:49.888 }' 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99389 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99389 ']' 00:16:49.888 23:34:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99389 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99389 00:16:49.888 killing process with pid 99389 00:16:49.888 Received shutdown signal, test time was about 60.000000 seconds 00:16:49.888 00:16:49.888 Latency(us) 00:16:49.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.888 =================================================================================================================== 00:16:49.888 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.888 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.889 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99389' 00:16:49.889 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99389 00:16:49.889 [2024-09-30 23:34:29.650621] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.889 [2024-09-30 23:34:29.650749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.889 23:34:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99389 00:16:49.889 [2024-09-30 23:34:29.650802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.889 [2024-09-30 23:34:29.650811] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:49.889 [2024-09-30 23:34:29.712547] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.458 23:34:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:50.458 00:16:50.458 real 0m16.298s 00:16:50.458 user 0m21.632s 00:16:50.458 sys 0m1.699s 00:16:50.458 23:34:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.458 23:34:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.458 ************************************ 00:16:50.458 END TEST raid_rebuild_test_sb_md_interleaved 00:16:50.458 ************************************ 00:16:50.458 23:34:30 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:50.458 23:34:30 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:50.458 23:34:30 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99389 ']' 00:16:50.458 23:34:30 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99389 00:16:50.458 23:34:30 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:50.458 00:16:50.458 real 10m0.517s 00:16:50.458 user 14m5.539s 00:16:50.458 sys 1m52.968s 00:16:50.458 23:34:30 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.458 23:34:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.458 ************************************ 00:16:50.458 END TEST bdev_raid 00:16:50.458 ************************************ 00:16:50.458 23:34:30 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:50.458 23:34:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:50.458 23:34:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.458 23:34:30 -- common/autotest_common.sh@10 -- # set +x 00:16:50.458 ************************************ 00:16:50.458 START TEST spdkcli_raid 00:16:50.458 
************************************ 00:16:50.458 23:34:30 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:50.719 * Looking for test storage... 00:16:50.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.719 23:34:30 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:50.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.719 --rc genhtml_branch_coverage=1 00:16:50.719 --rc genhtml_function_coverage=1 00:16:50.719 --rc genhtml_legend=1 00:16:50.719 --rc geninfo_all_blocks=1 00:16:50.719 --rc geninfo_unexecuted_blocks=1 00:16:50.719 00:16:50.719 ' 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:50.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.719 --rc genhtml_branch_coverage=1 00:16:50.719 --rc genhtml_function_coverage=1 00:16:50.719 --rc genhtml_legend=1 00:16:50.719 --rc geninfo_all_blocks=1 00:16:50.719 --rc geninfo_unexecuted_blocks=1 00:16:50.719 00:16:50.719 ' 00:16:50.719 
23:34:30 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:50.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.719 --rc genhtml_branch_coverage=1 00:16:50.719 --rc genhtml_function_coverage=1 00:16:50.719 --rc genhtml_legend=1 00:16:50.719 --rc geninfo_all_blocks=1 00:16:50.719 --rc geninfo_unexecuted_blocks=1 00:16:50.719 00:16:50.719 ' 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:50.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.719 --rc genhtml_branch_coverage=1 00:16:50.719 --rc genhtml_function_coverage=1 00:16:50.719 --rc genhtml_legend=1 00:16:50.719 --rc geninfo_all_blocks=1 00:16:50.719 --rc geninfo_unexecuted_blocks=1 00:16:50.719 00:16:50.719 ' 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:50.719 23:34:30 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100060 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:50.719 23:34:30 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100060 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100060 ']' 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:50.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:50.719 23:34:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.979 [2024-09-30 23:34:30.598535] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:16:50.979 [2024-09-30 23:34:30.598674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100060 ] 00:16:50.979 [2024-09-30 23:34:30.749018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:50.979 [2024-09-30 23:34:30.819964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.979 [2024-09-30 23:34:30.819995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.548 23:34:31 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.806 23:34:31 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:51.806 23:34:31 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:51.806 23:34:31 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:51.806 23:34:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.806 23:34:31 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:51.806 23:34:31 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.806 23:34:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.806 23:34:31 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:51.806 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:51.806 ' 00:16:53.187 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:53.187 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:53.447 23:34:33 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:53.447 23:34:33 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:53.447 23:34:33 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.447 23:34:33 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:53.447 23:34:33 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:53.447 23:34:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.447 23:34:33 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:53.447 ' 00:16:54.386 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:54.645 23:34:34 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:54.645 23:34:34 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:54.645 23:34:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.645 23:34:34 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:54.646 23:34:34 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:54.646 23:34:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.646 23:34:34 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:54.646 23:34:34 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:55.214 23:34:34 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:55.214 23:34:34 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:55.214 23:34:34 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:55.214 23:34:34 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.214 23:34:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.214 23:34:34 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:55.214 23:34:34 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:55.214 23:34:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.215 23:34:34 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:55.215 ' 00:16:56.153 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:56.153 23:34:35 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:56.153 23:34:35 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:56.153 23:34:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.412 23:34:36 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:56.412 23:34:36 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:56.412 23:34:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.412 23:34:36 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:56.412 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:56.412 ' 00:16:57.791 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:57.791 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:57.791 23:34:37 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.791 23:34:37 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100060 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100060 ']' 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100060 00:16:57.791 23:34:37 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100060 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100060' 00:16:57.791 killing process with pid 100060 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100060 00:16:57.791 23:34:37 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100060 00:16:58.730 23:34:38 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:58.730 23:34:38 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100060 ']' 00:16:58.730 23:34:38 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100060 00:16:58.730 23:34:38 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100060 ']' 00:16:58.730 23:34:38 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100060 00:16:58.730 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100060) - No such process 00:16:58.730 Process with pid 100060 is not found 00:16:58.730 23:34:38 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100060 is not found' 00:16:58.730 23:34:38 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:58.730 23:34:38 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:58.730 23:34:38 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:58.730 23:34:38 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:58.730 00:16:58.730 real 0m8.001s 00:16:58.730 user 0m16.587s 
00:16:58.730 sys 0m1.273s 00:16:58.730 23:34:38 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:58.730 23:34:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 ************************************ 00:16:58.730 END TEST spdkcli_raid 00:16:58.730 ************************************ 00:16:58.730 23:34:38 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:58.730 23:34:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:58.730 23:34:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:58.730 23:34:38 -- common/autotest_common.sh@10 -- # set +x 00:16:58.730 ************************************ 00:16:58.730 START TEST blockdev_raid5f 00:16:58.730 ************************************ 00:16:58.730 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:58.730 * Looking for test storage... 00:16:58.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:58.730 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:58.730 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:16:58.730 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:58.730 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.730 23:34:38 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.730 23:34:38 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.731 23:34:38 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:16:58.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.731 --rc genhtml_branch_coverage=1 00:16:58.731 --rc genhtml_function_coverage=1 00:16:58.731 --rc genhtml_legend=1 00:16:58.731 --rc geninfo_all_blocks=1 00:16:58.731 --rc geninfo_unexecuted_blocks=1 00:16:58.731 00:16:58.731 ' 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:58.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.731 --rc genhtml_branch_coverage=1 00:16:58.731 --rc genhtml_function_coverage=1 00:16:58.731 --rc genhtml_legend=1 00:16:58.731 --rc geninfo_all_blocks=1 00:16:58.731 --rc geninfo_unexecuted_blocks=1 00:16:58.731 00:16:58.731 ' 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:58.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.731 --rc genhtml_branch_coverage=1 00:16:58.731 --rc genhtml_function_coverage=1 00:16:58.731 --rc genhtml_legend=1 00:16:58.731 --rc geninfo_all_blocks=1 00:16:58.731 --rc geninfo_unexecuted_blocks=1 00:16:58.731 00:16:58.731 ' 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:58.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.731 --rc genhtml_branch_coverage=1 00:16:58.731 --rc genhtml_function_coverage=1 00:16:58.731 --rc genhtml_legend=1 00:16:58.731 --rc geninfo_all_blocks=1 00:16:58.731 --rc geninfo_unexecuted_blocks=1 00:16:58.731 00:16:58.731 ' 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100325 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
100325 00:16:58.731 23:34:38 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100325 ']' 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.731 23:34:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:58.990 [2024-09-30 23:34:38.651594] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:16:58.990 [2024-09-30 23:34:38.651867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100325 ] 00:16:58.990 [2024-09-30 23:34:38.812304] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.250 [2024-09-30 23:34:38.886054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:59.818 23:34:39 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:59.818 Malloc0 00:16:59.818 Malloc1 00:16:59.818 Malloc2 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:59.818 23:34:39 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6ff75ac9-cb57-4364-bc53-716c423b3d14"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6ff75ac9-cb57-4364-bc53-716c423b3d14",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6ff75ac9-cb57-4364-bc53-716c423b3d14",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4887e3b3-b72f-45a8-9d5c-95ee9deba456",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5972877b-4ad1-4c75-963c-b22b0751c4a2",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "46ad9bdc-255d-4f98-a565-0795cdcdcaa7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:59.818 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:00.078 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:00.078 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:00.078 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:00.078 23:34:39 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100325 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100325 ']' 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100325 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100325 00:17:00.078 killing process with pid 100325 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100325' 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100325 00:17:00.078 23:34:39 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100325 00:17:00.648 23:34:40 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:00.648 23:34:40 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:00.648 23:34:40 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:00.648 23:34:40 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.648 23:34:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.648 ************************************ 00:17:00.648 START TEST bdev_hello_world 00:17:00.648 ************************************ 00:17:00.648 23:34:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:00.907 [2024-09-30 23:34:40.552112] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:17:00.907 [2024-09-30 23:34:40.552218] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100360 ] 00:17:00.907 [2024-09-30 23:34:40.710728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.167 [2024-09-30 23:34:40.789052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.428 [2024-09-30 23:34:41.051455] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:01.428 [2024-09-30 23:34:41.051584] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:01.428 [2024-09-30 23:34:41.051610] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:01.428 [2024-09-30 23:34:41.052014] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:01.428 [2024-09-30 23:34:41.052163] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:01.428 [2024-09-30 23:34:41.052180] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:01.428 [2024-09-30 23:34:41.052233] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from 
bdev : Hello World! 00:17:01.428 00:17:01.428 [2024-09-30 23:34:41.052257] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:01.689 00:17:01.689 real 0m0.989s 00:17:01.689 user 0m0.553s 00:17:01.689 sys 0m0.318s 00:17:01.689 ************************************ 00:17:01.689 END TEST bdev_hello_world 00:17:01.689 ************************************ 00:17:01.689 23:34:41 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.689 23:34:41 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:01.689 23:34:41 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:01.689 23:34:41 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.689 23:34:41 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.689 23:34:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:01.689 ************************************ 00:17:01.689 START TEST bdev_bounds 00:17:01.689 ************************************ 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100391 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:01.948 Process bdevio pid: 100391 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100391' 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100391 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100391 ']' 00:17:01.948 
23:34:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.948 23:34:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:01.948 [2024-09-30 23:34:41.626711] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:17:01.948 [2024-09-30 23:34:41.626884] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100391 ] 00:17:01.948 [2024-09-30 23:34:41.791474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:02.208 [2024-09-30 23:34:41.866218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.208 [2024-09-30 23:34:41.866396] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.208 [2024-09-30 23:34:41.866499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.777 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.777 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:02.777 23:34:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:02.777 I/O targets: 00:17:02.777 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:02.777 
00:17:02.777 00:17:02.777 CUnit - A unit testing framework for C - Version 2.1-3 00:17:02.777 http://cunit.sourceforge.net/ 00:17:02.777 00:17:02.777 00:17:02.777 Suite: bdevio tests on: raid5f 00:17:02.777 Test: blockdev write read block ...passed 00:17:02.777 Test: blockdev write zeroes read block ...passed 00:17:02.777 Test: blockdev write zeroes read no split ...passed 00:17:02.777 Test: blockdev write zeroes read split ...passed 00:17:03.037 Test: blockdev write zeroes read split partial ...passed 00:17:03.037 Test: blockdev reset ...passed 00:17:03.037 Test: blockdev write read 8 blocks ...passed 00:17:03.037 Test: blockdev write read size > 128k ...passed 00:17:03.037 Test: blockdev write read invalid size ...passed 00:17:03.037 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:03.037 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:03.037 Test: blockdev write read max offset ...passed 00:17:03.037 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:03.037 Test: blockdev writev readv 8 blocks ...passed 00:17:03.037 Test: blockdev writev readv 30 x 1block ...passed 00:17:03.037 Test: blockdev writev readv block ...passed 00:17:03.037 Test: blockdev writev readv size > 128k ...passed 00:17:03.037 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:03.037 Test: blockdev comparev and writev ...passed 00:17:03.037 Test: blockdev nvme passthru rw ...passed 00:17:03.037 Test: blockdev nvme passthru vendor specific ...passed 00:17:03.037 Test: blockdev nvme admin passthru ...passed 00:17:03.037 Test: blockdev copy ...passed 00:17:03.037 00:17:03.037 Run Summary: Type Total Ran Passed Failed Inactive 00:17:03.037 suites 1 1 n/a 0 0 00:17:03.037 tests 23 23 23 0 0 00:17:03.037 asserts 130 130 130 0 n/a 00:17:03.037 00:17:03.037 Elapsed time = 0.339 seconds 00:17:03.037 0 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100391 
00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100391 ']' 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100391 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100391 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100391' 00:17:03.037 killing process with pid 100391 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100391 00:17:03.037 23:34:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100391 00:17:03.297 23:34:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:03.297 00:17:03.297 real 0m1.605s 00:17:03.297 user 0m3.597s 00:17:03.297 sys 0m0.452s 00:17:03.297 23:34:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.297 23:34:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:03.297 ************************************ 00:17:03.297 END TEST bdev_bounds 00:17:03.297 ************************************ 00:17:03.557 23:34:43 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:03.557 23:34:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:03.557 23:34:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:17:03.557 23:34:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:03.557 ************************************ 00:17:03.557 START TEST bdev_nbd 00:17:03.557 ************************************ 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:03.557 23:34:43 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:03.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100445 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100445 /var/tmp/spdk-nbd.sock 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100445 ']' 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.557 23:34:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:03.557 [2024-09-30 23:34:43.325406] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:17:03.557 [2024-09-30 23:34:43.325658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.817 [2024-09-30 23:34:43.488987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.817 [2024-09-30 23:34:43.561164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:04.386 23:34:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.645 1+0 records in 00:17:04.645 1+0 records out 00:17:04.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059506 s, 6.9 MB/s 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:04.645 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:04.905 { 00:17:04.905 "nbd_device": "/dev/nbd0", 00:17:04.905 "bdev_name": "raid5f" 00:17:04.905 } 00:17:04.905 ]' 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:04.905 { 00:17:04.905 "nbd_device": "/dev/nbd0", 00:17:04.905 "bdev_name": "raid5f" 00:17:04.905 } 00:17:04.905 ]' 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.905 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:05.163 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:05.163 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:05.163 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:05.163 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:05.163 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:05.163 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:05.163 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:05.163 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:05.164 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:05.164 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.164 23:34:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.423 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:05.682 /dev/nbd0 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.682 23:34:45 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.682 1+0 records in 00:17:05.682 1+0 records out 00:17:05.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610368 s, 6.7 MB/s 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.682 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:05.941 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:05.941 { 00:17:05.941 "nbd_device": "/dev/nbd0", 00:17:05.941 "bdev_name": "raid5f" 00:17:05.941 } 00:17:05.941 ]' 00:17:05.941 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:05.941 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:05.941 { 00:17:05.941 "nbd_device": "/dev/nbd0", 00:17:05.941 "bdev_name": "raid5f" 00:17:05.941 } 00:17:05.941 ]' 00:17:05.941 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:05.941 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:05.941 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:05.941 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:05.942 256+0 records in 00:17:05.942 256+0 records out 00:17:05.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143044 s, 73.3 MB/s 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:05.942 256+0 records in 00:17:05.942 256+0 records out 00:17:05.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304781 s, 34.4 MB/s 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.942 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:06.201 23:34:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:06.459 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:06.719 malloc_lvol_verify 00:17:06.719 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:06.978 6d63074f-17b3-4936-8fdc-fa452fd152da 00:17:06.978 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:06.978 5af3abce-ff23-4dbd-b378-c5e3a5a19cae 00:17:06.978 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:07.237 /dev/nbd0 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:07.237 mke2fs 1.47.0 (5-Feb-2023) 00:17:07.237 Discarding device blocks: 0/4096 done 00:17:07.237 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:07.237 00:17:07.237 Allocating group tables: 0/1 done 00:17:07.237 Writing inode tables: 0/1 done 00:17:07.237 Creating journal (1024 blocks): done 00:17:07.237 Writing superblocks and filesystem accounting information: 0/1 done 00:17:07.237 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.237 23:34:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:07.237 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.237 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100445 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100445 ']' 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100445 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100445 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:07.497 killing process with pid 100445 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100445' 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100445 00:17:07.497 23:34:47 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100445 00:17:08.067 23:34:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:08.067 00:17:08.067 real 0m4.450s 00:17:08.067 user 0m6.320s 00:17:08.067 sys 0m1.284s 00:17:08.068 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.068 23:34:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:08.068 ************************************ 00:17:08.068 END TEST bdev_nbd 00:17:08.068 ************************************ 00:17:08.068 23:34:47 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:08.068 23:34:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:08.068 23:34:47 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:08.068 23:34:47 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:08.068 23:34:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:08.068 23:34:47 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.068 23:34:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:08.068 ************************************ 00:17:08.068 START TEST bdev_fio 00:17:08.068 ************************************ 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:08.068 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:08.068 ************************************ 00:17:08.068 START TEST bdev_fio_rw_verify 00:17:08.068 ************************************ 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:08.068 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:08.328 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:08.328 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:08.328 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:08.328 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:08.328 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:08.328 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:17:08.328 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:08.328 23:34:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:08.328 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:08.328 fio-3.35 00:17:08.328 Starting 1 thread 00:17:20.565 00:17:20.565 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100630: Mon Sep 30 23:34:58 2024 00:17:20.565 read: IOPS=12.7k, BW=49.8MiB/s (52.2MB/s)(498MiB/10001msec) 00:17:20.565 slat (nsec): min=16752, max=57709, avg=18151.39, stdev=1672.84 00:17:20.565 clat (usec): min=11, max=294, avg=126.18, stdev=42.97 00:17:20.565 lat (usec): min=29, max=314, avg=144.33, stdev=43.19 00:17:20.565 clat percentiles (usec): 00:17:20.565 | 50.000th=[ 130], 99.000th=[ 206], 99.900th=[ 239], 99.990th=[ 262], 00:17:20.565 | 99.999th=[ 289] 00:17:20.565 write: IOPS=13.4k, BW=52.2MiB/s (54.8MB/s)(516MiB/9879msec); 0 zone resets 00:17:20.565 slat (usec): min=7, max=253, avg=15.92, stdev= 3.53 00:17:20.565 clat (usec): min=57, max=1432, avg=288.87, stdev=41.47 00:17:20.565 lat (usec): min=72, max=1681, avg=304.79, stdev=42.61 00:17:20.565 clat percentiles (usec): 00:17:20.565 | 50.000th=[ 293], 99.000th=[ 367], 99.900th=[ 594], 99.990th=[ 1221], 00:17:20.565 | 99.999th=[ 1369] 00:17:20.565 bw ( KiB/s): min=50576, max=54992, per=98.73%, avg=52788.21, stdev=1434.99, samples=19 00:17:20.565 iops : min=12644, max=13748, avg=13197.05, stdev=358.75, samples=19 00:17:20.565 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.77%, 250=40.67%, 500=42.47% 00:17:20.565 lat (usec) : 750=0.04%, 1000=0.02% 00:17:20.565 lat (msec) : 2=0.01% 00:17:20.565 cpu : usr=98.88%, sys=0.47%, ctx=30, majf=0, minf=13489 00:17:20.565 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.565 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.565 issued rwts: total=127499,132054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.565 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:20.565 00:17:20.565 Run status group 0 (all jobs): 00:17:20.565 READ: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=498MiB (522MB), run=10001-10001msec 00:17:20.565 WRITE: bw=52.2MiB/s (54.8MB/s), 52.2MiB/s-52.2MiB/s (54.8MB/s-54.8MB/s), io=516MiB (541MB), run=9879-9879msec 00:17:20.565 ----------------------------------------------------- 00:17:20.565 Suppressions used: 00:17:20.565 count bytes template 00:17:20.565 1 7 /usr/src/fio/parse.c 00:17:20.565 599 57504 /usr/src/fio/iolog.c 00:17:20.565 1 8 libtcmalloc_minimal.so 00:17:20.565 1 904 libcrypto.so 00:17:20.565 ----------------------------------------------------- 00:17:20.565 00:17:20.565 00:17:20.565 real 0m11.393s 00:17:20.565 user 0m11.610s 00:17:20.565 sys 0m0.669s 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:20.565 ************************************ 00:17:20.565 END TEST bdev_fio_rw_verify 00:17:20.565 ************************************ 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:20.565 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6ff75ac9-cb57-4364-bc53-716c423b3d14"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"6ff75ac9-cb57-4364-bc53-716c423b3d14",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6ff75ac9-cb57-4364-bc53-716c423b3d14",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4887e3b3-b72f-45a8-9d5c-95ee9deba456",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5972877b-4ad1-4c75-963c-b22b0751c4a2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "46ad9bdc-255d-4f98-a565-0795cdcdcaa7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:20.566 /home/vagrant/spdk_repo/spdk 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:17:20.566 00:17:20.566 real 0m11.697s 00:17:20.566 user 0m11.737s 00:17:20.566 sys 0m0.819s 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.566 23:34:59 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:20.566 ************************************ 00:17:20.566 END TEST bdev_fio 00:17:20.566 ************************************ 00:17:20.566 23:34:59 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:20.566 23:34:59 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:20.566 23:34:59 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:20.566 23:34:59 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:20.566 23:34:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:20.566 ************************************ 00:17:20.566 START TEST bdev_verify 00:17:20.566 ************************************ 00:17:20.566 23:34:59 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:20.566 [2024-09-30 23:34:59.619162] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:17:20.566 [2024-09-30 23:34:59.619919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100787 ] 00:17:20.566 [2024-09-30 23:34:59.782799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:20.566 [2024-09-30 23:34:59.862049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.566 [2024-09-30 23:34:59.862159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.566 Running I/O for 5 seconds... 00:17:25.293 11183.00 IOPS, 43.68 MiB/s 11346.50 IOPS, 44.32 MiB/s 11333.67 IOPS, 44.27 MiB/s 11351.50 IOPS, 44.34 MiB/s 11324.80 IOPS, 44.24 MiB/s 00:17:25.293 Latency(us) 00:17:25.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.293 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:25.293 Verification LBA range: start 0x0 length 0x2000 00:17:25.293 raid5f : 5.01 6787.96 26.52 0.00 0.00 28339.22 232.52 20376.26 00:17:25.293 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:25.293 Verification LBA range: start 0x2000 length 0x2000 00:17:25.293 raid5f : 5.02 4526.25 17.68 0.00 0.00 42368.17 232.52 31823.59 00:17:25.293 =================================================================================================================== 00:17:25.293 Total : 11314.21 44.20 0.00 0.00 33957.03 232.52 31823.59 00:17:25.862 00:17:25.862 real 0m6.011s 00:17:25.862 user 0m11.021s 00:17:25.862 sys 0m0.337s 00:17:25.862 23:35:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.862 23:35:05 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:25.862 ************************************ 00:17:25.862 END TEST bdev_verify 00:17:25.862 
************************************ 00:17:25.862 23:35:05 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:25.862 23:35:05 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:25.862 23:35:05 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.862 23:35:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:25.862 ************************************ 00:17:25.862 START TEST bdev_verify_big_io 00:17:25.862 ************************************ 00:17:25.862 23:35:05 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:25.862 [2024-09-30 23:35:05.701362] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:17:25.862 [2024-09-30 23:35:05.701482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100873 ] 00:17:26.121 [2024-09-30 23:35:05.867728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:26.122 [2024-09-30 23:35:05.943317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.122 [2024-09-30 23:35:05.943425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.380 Running I/O for 5 seconds... 
00:17:31.770 633.00 IOPS, 39.56 MiB/s 761.00 IOPS, 47.56 MiB/s 782.00 IOPS, 48.88 MiB/s 793.25 IOPS, 49.58 MiB/s 799.40 IOPS, 49.96 MiB/s 00:17:31.770 Latency(us) 00:17:31.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.770 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:31.770 Verification LBA range: start 0x0 length 0x200 00:17:31.770 raid5f : 5.19 464.90 29.06 0.00 0.00 6873686.06 212.85 302209.68 00:17:31.770 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:31.770 Verification LBA range: start 0x200 length 0x200 00:17:31.770 raid5f : 5.30 358.74 22.42 0.00 0.00 8798339.34 222.69 375472.63 00:17:31.770 =================================================================================================================== 00:17:31.770 Total : 823.64 51.48 0.00 0.00 7722299.40 212.85 375472.63 00:17:32.339 00:17:32.339 real 0m6.299s 00:17:32.339 user 0m11.578s 00:17:32.339 sys 0m0.352s 00:17:32.339 23:35:11 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.339 23:35:11 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.339 ************************************ 00:17:32.339 END TEST bdev_verify_big_io 00:17:32.339 ************************************ 00:17:32.339 23:35:11 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:32.339 23:35:11 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:32.339 23:35:11 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.339 23:35:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:32.339 ************************************ 00:17:32.339 START TEST bdev_write_zeroes 00:17:32.339 ************************************ 
00:17:32.339 23:35:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:32.339 [2024-09-30 23:35:12.074752] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:17:32.339 [2024-09-30 23:35:12.074891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100956 ] 00:17:32.599 [2024-09-30 23:35:12.234181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.599 [2024-09-30 23:35:12.319100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.859 Running I/O for 1 seconds... 00:17:33.798 30063.00 IOPS, 117.43 MiB/s 00:17:33.798 Latency(us) 00:17:33.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.798 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:33.798 raid5f : 1.01 30028.31 117.30 0.00 0.00 4250.65 1430.92 5780.90 00:17:33.798 =================================================================================================================== 00:17:33.798 Total : 30028.31 117.30 0.00 0.00 4250.65 1430.92 5780.90 00:17:34.368 00:17:34.368 real 0m2.023s 00:17:34.368 user 0m1.562s 00:17:34.368 sys 0m0.331s 00:17:34.368 23:35:14 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.368 23:35:14 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:34.368 ************************************ 00:17:34.368 END TEST bdev_write_zeroes 00:17:34.368 ************************************ 00:17:34.368 23:35:14 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:34.368 23:35:14 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:34.368 23:35:14 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.368 23:35:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:34.368 ************************************ 00:17:34.368 START TEST bdev_json_nonenclosed 00:17:34.368 ************************************ 00:17:34.368 23:35:14 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:34.368 [2024-09-30 23:35:14.175994] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 00:17:34.368 [2024-09-30 23:35:14.176114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100998 ] 00:17:34.628 [2024-09-30 23:35:14.335264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.628 [2024-09-30 23:35:14.421010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.628 [2024-09-30 23:35:14.421142] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:17:34.628 [2024-09-30 23:35:14.421171] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:34.628 [2024-09-30 23:35:14.421185] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:34.887 00:17:34.887 real 0m0.485s 00:17:34.887 user 0m0.220s 00:17:34.887 sys 0m0.160s 00:17:34.887 23:35:14 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.887 23:35:14 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:34.887 ************************************ 00:17:34.887 END TEST bdev_json_nonenclosed 00:17:34.887 ************************************ 00:17:34.887 23:35:14 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:34.887 23:35:14 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:34.887 23:35:14 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.887 23:35:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:34.887 ************************************ 00:17:34.887 START TEST bdev_json_nonarray 00:17:34.887 ************************************ 00:17:34.887 23:35:14 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:34.888 [2024-09-30 23:35:14.732026] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 23.11.0 initialization... 
00:17:34.888 [2024-09-30 23:35:14.732138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101028 ] 00:17:35.147 [2024-09-30 23:35:14.895655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.147 [2024-09-30 23:35:14.968673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.147 [2024-09-30 23:35:14.968804] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:17:35.147 [2024-09-30 23:35:14.968833] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:35.147 [2024-09-30 23:35:14.968846] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:35.407 00:17:35.407 real 0m0.480s 00:17:35.407 user 0m0.226s 00:17:35.407 sys 0m0.150s 00:17:35.407 23:35:15 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.407 23:35:15 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:35.407 ************************************ 00:17:35.407 END TEST bdev_json_nonarray 00:17:35.407 ************************************ 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:35.407 23:35:15 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:35.407 00:17:35.407 real 0m36.888s 00:17:35.407 user 0m48.894s 00:17:35.407 sys 0m5.426s 00:17:35.407 23:35:15 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.407 23:35:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:35.407 ************************************ 00:17:35.407 END TEST blockdev_raid5f 00:17:35.407 ************************************ 00:17:35.667 23:35:15 -- spdk/autotest.sh@194 -- # uname -s 00:17:35.667 23:35:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:35.667 23:35:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:35.667 23:35:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:35.667 23:35:15 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:35.667 23:35:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:35.667 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:17:35.667 23:35:15 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:35.667 23:35:15 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:35.667 23:35:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:35.667 23:35:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:35.667 23:35:15 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:35.667 23:35:15 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:17:35.667 23:35:15 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:35.667 23:35:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:35.667 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:17:35.667 23:35:15 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:35.667 23:35:15 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:35.667 23:35:15 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:35.667 23:35:15 -- common/autotest_common.sh@10 -- # set +x 00:17:38.207 INFO: APP EXITING 00:17:38.207 INFO: killing all VMs 00:17:38.207 INFO: killing vhost app 00:17:38.207 INFO: EXIT DONE 00:17:38.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.467 Waiting for block devices as requested 00:17:38.467 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:38.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:39.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.666 Cleaning 00:17:39.666 Removing: /var/run/dpdk/spdk0/config 00:17:39.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:39.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:39.666 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:39.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:39.666 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:39.666 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:39.666 Removing: /dev/shm/spdk_tgt_trace.pid69151 00:17:39.666 Removing: /var/run/dpdk/spdk0 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100060 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100325 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100360 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100391 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100625 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100787 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100873 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100956 00:17:39.666 Removing: /var/run/dpdk/spdk_pid100998 00:17:39.666 Removing: /var/run/dpdk/spdk_pid101028 00:17:39.666 Removing: /var/run/dpdk/spdk_pid68988 00:17:39.666 Removing: /var/run/dpdk/spdk_pid69151 00:17:39.666 Removing: /var/run/dpdk/spdk_pid69358 00:17:39.666 Removing: /var/run/dpdk/spdk_pid69446 00:17:39.666 Removing: /var/run/dpdk/spdk_pid69469 00:17:39.666 Removing: /var/run/dpdk/spdk_pid69586 00:17:39.666 Removing: /var/run/dpdk/spdk_pid69604 00:17:39.666 Removing: /var/run/dpdk/spdk_pid69792 00:17:39.666 Removing: /var/run/dpdk/spdk_pid69860 00:17:39.926 Removing: /var/run/dpdk/spdk_pid69945 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70045 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70125 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70165 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70196 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70272 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70378 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70798 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70851 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70903 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70919 00:17:39.926 Removing: /var/run/dpdk/spdk_pid70990 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71006 00:17:39.926 Removing: 
/var/run/dpdk/spdk_pid71064 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71080 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71133 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71140 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71193 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71210 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71346 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71382 00:17:39.926 Removing: /var/run/dpdk/spdk_pid71466 00:17:39.926 Removing: /var/run/dpdk/spdk_pid72640 00:17:39.926 Removing: /var/run/dpdk/spdk_pid72846 00:17:39.926 Removing: /var/run/dpdk/spdk_pid72975 00:17:39.926 Removing: /var/run/dpdk/spdk_pid73580 00:17:39.926 Removing: /var/run/dpdk/spdk_pid73775 00:17:39.926 Removing: /var/run/dpdk/spdk_pid73908 00:17:39.926 Removing: /var/run/dpdk/spdk_pid74514 00:17:39.926 Removing: /var/run/dpdk/spdk_pid74828 00:17:39.926 Removing: /var/run/dpdk/spdk_pid74961 00:17:39.926 Removing: /var/run/dpdk/spdk_pid76292 00:17:39.926 Removing: /var/run/dpdk/spdk_pid76534 00:17:39.926 Removing: /var/run/dpdk/spdk_pid76663 00:17:39.926 Removing: /var/run/dpdk/spdk_pid78003 00:17:39.926 Removing: /var/run/dpdk/spdk_pid78235 00:17:39.926 Removing: /var/run/dpdk/spdk_pid78370 00:17:39.926 Removing: /var/run/dpdk/spdk_pid79705 00:17:39.926 Removing: /var/run/dpdk/spdk_pid80144 00:17:39.926 Removing: /var/run/dpdk/spdk_pid80274 00:17:39.926 Removing: /var/run/dpdk/spdk_pid81704 00:17:39.926 Removing: /var/run/dpdk/spdk_pid81951 00:17:39.926 Removing: /var/run/dpdk/spdk_pid82081 00:17:39.926 Removing: /var/run/dpdk/spdk_pid83506 00:17:39.926 Removing: /var/run/dpdk/spdk_pid83754 00:17:39.926 Removing: /var/run/dpdk/spdk_pid83889 00:17:39.926 Removing: /var/run/dpdk/spdk_pid85319 00:17:39.926 Removing: /var/run/dpdk/spdk_pid85795 00:17:39.926 Removing: /var/run/dpdk/spdk_pid85924 00:17:39.926 Removing: /var/run/dpdk/spdk_pid86051 00:17:39.926 Removing: /var/run/dpdk/spdk_pid86469 00:17:39.926 Removing: /var/run/dpdk/spdk_pid87183 00:17:39.926 Removing: 
/var/run/dpdk/spdk_pid87548 00:17:39.926 Removing: /var/run/dpdk/spdk_pid88229 00:17:40.186 Removing: /var/run/dpdk/spdk_pid88654 00:17:40.186 Removing: /var/run/dpdk/spdk_pid89391 00:17:40.186 Removing: /var/run/dpdk/spdk_pid89791 00:17:40.186 Removing: /var/run/dpdk/spdk_pid91707 00:17:40.186 Removing: /var/run/dpdk/spdk_pid92134 00:17:40.186 Removing: /var/run/dpdk/spdk_pid92559 00:17:40.186 Removing: /var/run/dpdk/spdk_pid94588 00:17:40.186 Removing: /var/run/dpdk/spdk_pid95059 00:17:40.186 Removing: /var/run/dpdk/spdk_pid95564 00:17:40.186 Removing: /var/run/dpdk/spdk_pid96603 00:17:40.186 Removing: /var/run/dpdk/spdk_pid96920 00:17:40.186 Removing: /var/run/dpdk/spdk_pid97835 00:17:40.186 Removing: /var/run/dpdk/spdk_pid98154 00:17:40.186 Removing: /var/run/dpdk/spdk_pid99072 00:17:40.186 Removing: /var/run/dpdk/spdk_pid99389 00:17:40.186 Clean 00:17:40.186 23:35:19 -- common/autotest_common.sh@1451 -- # return 0 00:17:40.186 23:35:19 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:40.186 23:35:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.186 23:35:19 -- common/autotest_common.sh@10 -- # set +x 00:17:40.186 23:35:19 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:40.186 23:35:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.186 23:35:19 -- common/autotest_common.sh@10 -- # set +x 00:17:40.186 23:35:20 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:40.446 23:35:20 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:40.446 23:35:20 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:40.446 23:35:20 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:40.446 23:35:20 -- spdk/autotest.sh@394 -- # hostname 00:17:40.446 23:35:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:17:40.446 geninfo: WARNING: invalid characters removed from testname!
00:18:07.011 23:35:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:07.011 23:35:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:08.922 23:35:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:10.831 23:35:50 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:12.738 23:35:52 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:14.724 23:35:54 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:16.694 23:35:56 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:16.694 23:35:56 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:18:16.694 23:35:56 -- common/autotest_common.sh@1681 -- $ lcov --version
00:18:16.694 23:35:56 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:18:16.694 23:35:56 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:18:16.694 23:35:56 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:18:16.694 23:35:56 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:18:16.694 23:35:56 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:18:16.694 23:35:56 -- scripts/common.sh@336 -- $ IFS=.-:
00:18:16.694 23:35:56 -- scripts/common.sh@336 -- $ read -ra ver1
00:18:16.694 23:35:56 -- scripts/common.sh@337 -- $ IFS=.-:
00:18:16.694 23:35:56 -- scripts/common.sh@337 -- $ read -ra ver2
00:18:16.694 23:35:56 -- scripts/common.sh@338 -- $ local 'op=<'
00:18:16.694 23:35:56 -- scripts/common.sh@340 -- $ ver1_l=2
00:18:16.694 23:35:56 -- scripts/common.sh@341 -- $ ver2_l=1
00:18:16.694 23:35:56 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:18:16.694 23:35:56 -- scripts/common.sh@344 -- $ case "$op" in
00:18:16.694 23:35:56 -- scripts/common.sh@345 -- $ : 1
00:18:16.694 23:35:56 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:18:16.694 23:35:56 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:16.694 23:35:56 -- scripts/common.sh@365 -- $ decimal 1
00:18:16.694 23:35:56 -- scripts/common.sh@353 -- $ local d=1
00:18:16.694 23:35:56 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:18:16.694 23:35:56 -- scripts/common.sh@355 -- $ echo 1
00:18:16.694 23:35:56 -- scripts/common.sh@365 -- $ ver1[v]=1
00:18:16.694 23:35:56 -- scripts/common.sh@366 -- $ decimal 2
00:18:16.694 23:35:56 -- scripts/common.sh@353 -- $ local d=2
00:18:16.694 23:35:56 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:18:16.694 23:35:56 -- scripts/common.sh@355 -- $ echo 2
00:18:16.694 23:35:56 -- scripts/common.sh@366 -- $ ver2[v]=2
00:18:16.694 23:35:56 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:18:16.694 23:35:56 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:18:16.694 23:35:56 -- scripts/common.sh@368 -- $ return 0
00:18:16.694 23:35:56 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:16.694 23:35:56 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:18:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:16.694 --rc genhtml_branch_coverage=1
00:18:16.694 --rc genhtml_function_coverage=1
00:18:16.694 --rc genhtml_legend=1
00:18:16.694 --rc geninfo_all_blocks=1
00:18:16.694 --rc geninfo_unexecuted_blocks=1
00:18:16.694
00:18:16.694 '
00:18:16.694 23:35:56 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:18:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:16.694 --rc genhtml_branch_coverage=1
00:18:16.694 --rc genhtml_function_coverage=1
00:18:16.694 --rc genhtml_legend=1
00:18:16.694 --rc geninfo_all_blocks=1
00:18:16.694 --rc geninfo_unexecuted_blocks=1
00:18:16.694
00:18:16.694 '
00:18:16.694 23:35:56 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:18:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:16.694 --rc genhtml_branch_coverage=1
00:18:16.694 --rc genhtml_function_coverage=1
00:18:16.694 --rc genhtml_legend=1
00:18:16.694 --rc geninfo_all_blocks=1
00:18:16.694 --rc geninfo_unexecuted_blocks=1
00:18:16.694
00:18:16.694 '
00:18:16.694 23:35:56 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:18:16.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:16.694 --rc genhtml_branch_coverage=1
00:18:16.694 --rc genhtml_function_coverage=1
00:18:16.694 --rc genhtml_legend=1
00:18:16.694 --rc geninfo_all_blocks=1
00:18:16.694 --rc geninfo_unexecuted_blocks=1
00:18:16.694
00:18:16.694 '
00:18:16.694 23:35:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:16.694 23:35:56 -- scripts/common.sh@15 -- $ shopt -s extglob
00:18:16.694 23:35:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:18:16.694 23:35:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:16.694 23:35:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:16.694 23:35:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:16.694 23:35:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:16.694 23:35:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:16.694 23:35:56 -- paths/export.sh@5 -- $ export PATH
00:18:16.695 23:35:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:16.695 23:35:56 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:18:16.695 23:35:56 -- common/autobuild_common.sh@479 -- $ date +%s
00:18:16.695 23:35:56 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727739356.XXXXXX
00:18:16.695 23:35:56 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727739356.qmHMkS
00:18:16.695 23:35:56 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:18:16.695 23:35:56 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:18:16.695 23:35:56 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:18:16.695 23:35:56 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:18:16.695 23:35:56 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:18:16.695 23:35:56 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:18:16.695 23:35:56 -- common/autobuild_common.sh@495 -- $ get_config_params
00:18:16.695 23:35:56 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:18:16.695 23:35:56 -- common/autotest_common.sh@10 -- $ set +x
00:18:16.695 23:35:56 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:18:16.695 23:35:56 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:18:16.695 23:35:56 -- pm/common@17 -- $ local monitor
00:18:16.695 23:35:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:16.695 23:35:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:16.695 23:35:56 -- pm/common@25 -- $ sleep 1
00:18:16.695 23:35:56 -- pm/common@21 -- $ date +%s
00:18:16.695 23:35:56 -- pm/common@21 -- $ date +%s
00:18:16.695 23:35:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727739356
00:18:16.695 23:35:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727739356
00:18:16.695 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727739356_collect-cpu-load.pm.log
00:18:16.695 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727739356_collect-vmstat.pm.log
00:18:17.634 23:35:57 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:18:17.634 23:35:57 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:18:17.634 23:35:57 -- spdk/autopackage.sh@14 -- $ timing_finish
00:18:17.634 23:35:57 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:17.634 23:35:57 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:17.634 23:35:57 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:17.634 23:35:57 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:18:17.634 23:35:57 -- pm/common@29 -- $ signal_monitor_resources TERM
00:18:17.634 23:35:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:18:17.634 23:35:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:17.634 23:35:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:18:17.634 23:35:57 -- pm/common@44 -- $ pid=102562
00:18:17.634 23:35:57 -- pm/common@50 -- $ kill -TERM 102562
00:18:17.634 23:35:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:17.634 23:35:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:18:17.634 23:35:57 -- pm/common@44 -- $ pid=102564
00:18:17.634 23:35:57 -- pm/common@50 -- $ kill -TERM 102564
00:18:17.634 + [[ -n 6161 ]]
00:18:17.634 + sudo kill 6161
00:18:17.903 [Pipeline] }
00:18:17.919 [Pipeline] // timeout
00:18:17.924 [Pipeline] }
00:18:17.940 [Pipeline] // stage
00:18:17.945 [Pipeline] }
00:18:17.958 [Pipeline] // catchError
00:18:17.967 [Pipeline] stage
00:18:17.968 [Pipeline] { (Stop VM)
00:18:17.980 [Pipeline] sh
00:18:18.263 + vagrant halt
00:18:20.171 ==> default: Halting domain...
00:18:28.312 [Pipeline] sh
00:18:28.594 + vagrant destroy -f
00:18:31.133 ==> default: Removing domain...
00:18:31.146 [Pipeline] sh
00:18:31.429 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:31.438 [Pipeline] }
00:18:31.452 [Pipeline] // stage
00:18:31.458 [Pipeline] }
00:18:31.471 [Pipeline] // dir
00:18:31.477 [Pipeline] }
00:18:31.491 [Pipeline] // wrap
00:18:31.497 [Pipeline] }
00:18:31.509 [Pipeline] // catchError
00:18:31.517 [Pipeline] stage
00:18:31.519 [Pipeline] { (Epilogue)
00:18:31.533 [Pipeline] sh
00:18:31.819 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:36.036 [Pipeline] catchError
00:18:36.038 [Pipeline] {
00:18:36.050 [Pipeline] sh
00:18:36.332 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:36.332 Artifacts sizes are good
00:18:36.341 [Pipeline] }
00:18:36.351 [Pipeline] // catchError
00:18:36.361 [Pipeline] archiveArtifacts
00:18:36.369 Archiving artifacts
00:18:36.480 [Pipeline] cleanWs
00:18:36.490 [WS-CLEANUP] Deleting project workspace...
00:18:36.490 [WS-CLEANUP] Deferred wipeout is used...
00:18:36.496 [WS-CLEANUP] done
00:18:36.498 [Pipeline] }
00:18:36.512 [Pipeline] // stage
00:18:36.517 [Pipeline] }
00:18:36.530 [Pipeline] // node
00:18:36.535 [Pipeline] End of Pipeline
00:18:36.581 Finished: SUCCESS
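The lcov invocations near the top of this log implement a simple post-processing pipeline: merge the pre-test baseline trace with the post-test capture, then repeatedly remove paths (DPDK, system headers, example and helper apps) that should not count toward SPDK coverage. A condensed dry-run sketch of that pipeline follows; the `run` helper is hypothetical and only prints each command so the sketch works without lcov installed, and `LCOV_OPTS` stands in for the long `--rc ...` option list seen in the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the coverage post-processing steps in the log above.
# "run" is a hypothetical helper that prints the command instead of
# executing it, so this sketch does not require lcov to be installed.
out=/home/vagrant/spdk_repo/spdk/../output
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
run() { echo "$@"; }

# 1. Merge the pre-test baseline with the post-test capture (-a adds traces).
run lcov $LCOV_OPTS -q \
    -a "$out/cov_base.info" -a "$out/cov_test.info" \
    -o "$out/cov_total.info"

# 2. Strip paths that should not count toward SPDK coverage (-r removes
#    data for files matching the pattern, rewriting the trace in place).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  run lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pattern" \
      -o "$out/cov_total.info"
done
```

Filtering after the merge, rather than during capture, keeps the baseline and test traces symmetric; the same exclusion list is applied exactly once to the combined total.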
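The `lt 1.15 2` / `cmp_versions` trace from scripts/common.sh above shows how the harness decides whether the installed lcov predates 2.x: each version string is split on `.`, `-` and `:`, and the components are compared numerically left to right. A condensed re-implementation of that comparison as a standalone function (a sketch of the logic visible in the trace, not the actual scripts/common.sh code; it assumes numeric components):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison seen in the scripts/common.sh trace:
# split each version on '.', '-' and ':', then compare parts numerically.
lt() { # usage: lt VER1 VER2 -> exit 0 if VER1 < VER2
  local -a ver1 ver2
  local ver1_l ver2_l v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  ver1_l=${#ver1[@]}
  ver2_l=${#ver2[@]}
  # Walk the longer of the two component lists.
  for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
    # Missing components compare as 0 (e.g. "2" behaves like "2.0").
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1 # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # the case in the log: lcov 1.15 predates 2.x
lt 2.1 2.1.3 && echo "2.1 < 2.1.3"
```

On this run the check succeeded (lcov 1.15 < 2), which is why the harness exported only the pre-2.x `--rc lcov_*` options into `LCOV_OPTS`.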